The Stet.io backend is now written using the Quart framework, having been converted from the Flask framework. This was quite an easy transition, requiring instances of flask to be replaced with quart and the async and await keywords to be added. The keywords were required almost exclusively in the view functions,
async def view():
    return await render_template(...)
This should make Stet able to handle around double the requests for the same infrastructure (although it isn't currently close to the limits :( ). It should also result in all the pages and data being served via the HTTP/2 protocol, which should make the page load time noticeably faster.
The Stet.io website excluding the editor is often termed the backend. The backend of Stet.io is written in Python3 using the Flask framework. The backend and editor communicate via AJAX requests, sending JSON formatted data. These requests include an authentication token stored in a cookie, which can be hard to test using the Flask framework.
The issue comes down to the use of test_request_context alongside cookies. Setting a cookie on a Flask test client is easy via its set_cookie method. To use test_request_context, set the cookie as below,
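As a minimal sketch (the route and cookie name here are illustrative rather than Stet.io's actual values), the cookie can be supplied via the request headers when creating the context,
from flask import Flask, request

app = Flask(__name__)

# Supply the cookie via the Cookie header, as test_request_context has no
# set_cookie helper; 'auth_token' is a hypothetical cookie name.
with app.test_request_context('/', headers={'Cookie': 'auth_token=abcdef'}):
    assert request.cookies['auth_token'] == 'abcdef'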
The Stet.io editor is written in Javascript, with the pages served from a Python backend. Python is also my language of choice, which has led to the tooling around the Javascript editor being written in Python. This is not an ideal solution, and following advances in Javascript tooling it is time to upgrade.
The old system was based on Python {Paver} and Javascript {Uglify-JS, Uglify-CSS, JSHint}, as in the following snippet, where files is a list of the .js files to build and lint,
from paver.easy import task, sh

@task
def lint():
    sh('jshint ' + ' '.join(files))

@task
def build_css(files, target, options=[]):
    sh("uglifycss %s > %s" % (' '.join(files), target))

@task
def build_js(files, target, options=[]):
    sh("uglifyjs %s -o %s" % (' '.join(files), target))
This system has the downsides of requiring global npm installs that were not version controlled, alongside mixing the tooling and development languages. Additionally the test setup, using mocha, was difficult to use (not command line based) and therefore poorly maintained.
I'm choosing to modernize by using NPM and ES6, i.e. fully Javascript tooling. To begin with, setting up the package.json seems key; ignoring some details I've gone with,
{
  "scripts": {
    "build": "webpack --config webpack.config.js",
    "lint": "eslint src/**/*.js test/**/*.js",
    "test": "mocha --compilers js:babel-core/register test/**/*.spec.js"
  },
  "private": true,
  "devDependencies": {
    "babel-core": "^6.18.2",
    "babel-loader": "^6.2.8",
    "babel-preset-es2015": "^6.18.0",
    "chai": "^3.5.0",
    "eslint": "^3.10.2",
    "mocha": "^3.2.0",
    "webpack": "^1.13.3"
  },
  "babel": {
    "presets": [
      "es2015"
    ]
  }
}
This is structured such that the source and test code are in the src and test subdirectories respectively.
I'm also following the convention that the test code mirrors the source code structure and filenames, only with the .spec.js suffix.
Note that I've also switched from JSHint to ESLint for my linting needs, as the latter seems to support ES6 better.
The build system uses webpack, as configured below; note the heavy use of Babel to avoid ES6 compatibility issues. Ideally I'd not bother with this, but UglifyJS and Mocha both struggle with the syntax, mandating its use.
var webpack = require('webpack');

module.exports = {
  entry: "./src/index.js",
  module: {
    loaders: [{
      test: /\.js?$/,
      loader: 'babel-loader',
      query: {
        presets: ['es2015']
      }
    }]
  },
  output: {
    filename: "build/filters.min.js"
  },
  plugins: [
    new webpack.optimize.UglifyJsPlugin({minimize: true})
  ]
};
The very last piece of config is for ESLint, which I've set up as below. The unused-args pattern is particularly useful given my Python background, as it allows `_`-prefixed args to be ignored. The only other key part is the sourceType setting, needed for the ES6 module import syntax (although I've not used it in the webpack setup).
{
  "env": {
    "browser": true,
    "es6": true
  },
  "extends": "eslint:recommended",
  "parserOptions": {
    "sourceType": "module"
  },
  "rules": {
    "indent": [
      "error",
      2
    ],
    "linebreak-style": [
      "error",
      "unix"
    ],
    "no-unused-vars": [
      "error",
      { "argsIgnorePattern": "^_" }
    ],
    "quotes": [
      "error",
      "double"
    ],
    "semi": [
      "error",
      "always"
    ]
  }
}
Now instead of paver build and paver lint I have npm run build, npm run lint and npm run test, which I consider to be an improvement.
Aside from the improvements to development, this has allowed much improved test coverage and a host of bug fixes that go live today with 0.7.2.
Performance is a key aspect of Stet.io, especially when it comes to the computationally expensive filters. To improve this Stet.io makes use of a thread pool in order to parallelise the computation and keep the UI responsive. This thread pool uses web workers, as shown in the simplified example below:
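This is a minimal sketch rather than Stet.io's actual code; the worker script name (filter-worker.js) and the message format are illustrative, assuming the worker applies a filter to the pixels it receives and posts the result back.
function WorkerPool(size, script)
{
  this.queue = [];
  this.workers = [];
  for(var i = 0; i < size; i++)
  {
    this.workers.push(new Worker(script));
  }
}

WorkerPool.prototype.run = function(data, callback)
{
  var pool = this;
  if(this.workers.length === 0)
  {
    this.queue.push([data, callback]);  // all workers busy, queue the job
    return;
  }
  var worker = this.workers.pop();
  worker.onmessage = function(event)
  {
    callback(event.data);
    pool.workers.push(worker);  // return the worker to the pool
    if(pool.queue.length > 0)
    {
      var next = pool.queue.shift();
      pool.run(next[0], next[1]);
    }
  };
  worker.postMessage(data);
};

// Usage: run a filter without blocking the UI (the filter name is illustrative)
var pool = new WorkerPool(4, "filter-worker.js");
pool.run({filter: "blur", pixels: imageData.data}, function(result) {
  // draw the filtered pixels back to the canvas here
});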
This is just one of the areas being worked on to improve the overall experience of Stet.io.
Stet.io has always had functionality to export your image or save your image structure to the cloud. Stet.io now makes it possible to save your image structure locally. This allows you to edit an image, save it, come back later and edit it again with all the layers as you left them. Simply click the (save) button and choose to save or load. The images are saved in your browser and will be available whenever you visit Stet.io.
This works by making use of HTML5 local storage, like so:
// To save
window.localStorage.setItem('stet', canvas.toDataURL('image/png'));
// To load
var image = new Image();
image.onload = function() {
  context.drawImage(image, 0, 0);  // context is the canvas's 2d drawing context
};
image.src = window.localStorage.getItem('stet');
The image data is saved in your browser, meaning that no data leaves your computer. This is good for privacy, but prevents editing on another computer or even in a different browser. For that, consider using Stet.io's cloud storage instead.
The Stet.io website excluding the editor is often termed the backend. The backend of Stet.io is written in Python3 using the Flask framework. The backend and editor communicate via AJAX requests, sending JSON formatted data. JSON is well supported both in Flask/Python and in the web browser, making it an obvious choice.
Choosing JSON does however lead to one problem, which is how to easily test Flask routes that expect and return JSON. The default Flask TestClient (Flask-0.10) does not have any JSON support, therefore I use the code below.
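The following is a sketch of the idea rather than Stet.io's exact code; the JSONTestClient class name, the json keyword argument and the json_data attribute are my own names.
import json

from flask.testing import FlaskClient

class JSONTestClient(FlaskClient):

    def open(self, *args, **kwargs):
        # Encode an optional json argument as the request body
        if 'json' in kwargs:
            kwargs['data'] = json.dumps(kwargs.pop('json'))
            kwargs['content_type'] = 'application/json'
        response = super(JSONTestClient, self).open(*args, **kwargs)
        # Decode the response body for easy assertions
        response.json_data = json.loads(response.get_data(as_text=True))
        return response

# app is the Flask application under test
app.test_client_class = JSONTestClient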
It is possible that functionality similar to that above will be present in the next release of Flask. Keep an eye on Flask-#1416 to find out; note that this pull request is not associated with Stet.io, but is provided for reference.
Stet.io is now a year old, and with the milestone comes a new focus, away from the storage and on to the editor. Those of you with Stet.io accounts will be able to carry on as before, but those without will now need an invite to use the storage.
There are two main goals we now have for the editor: firstly to make it work better on mobile devices and secondly to be more reliable. You may have noticed the mobile interface change recently; the previous specialised editor is gone, and in its place is the full editor with a simplified interface. This should result in a better user experience and make it easier for us to fix bugs.
The Stet.io editor is continually improving with minor tweaks here and there. Recently, however, it was redesigned with a new branding and some changes to the user experience. This redesign hopefully makes the editor more pleasant and easier to use.
First an overview: before the redesign, version 0.3.X of the Stet.io editor looked like this (see right or larger). The general feedback was negative; the editor looked and felt too complicated (so many icons) and it looked a bit bland. More specifically, the icons that users look for first and expect to be top left (load, save) were in fact on the right, hidden below the tabs. The quickstart popup helps, but only slightly.
There were two aims for the redesign: to improve the UI so Stet.io stands out, and to improve the UX by organising the icons. The first was influenced by this proposed redesign for Photoshop, whilst the latter has been on the todo list for some time.
The first action was to choose a colour scheme and base dimension, i.e. the size of the icons. There are quite a few flat UI colour scheme websites that recommend swatches. For Stet.io the background colour is #262E30, the first accent #3B494C, the second accent #6C7A89 and the highlight #95A5A6, with the remaining colours #E2E2E2 and #008080. The latter two have been chosen with a reduced contrast in mind, so as to reduce eye strain. The swatch is shown below, in the same order as written.
As for the base dimension, 36px was chosen as the largest size that didn't overly reduce the available editing space. As before, all the icon buttons are square, with the icons themselves staying the same size. This gives the icons a metro-like style.
To improve the user experience, the load + save, copy + paste and undo + redo icons have been moved to the top left, in that order. This follows almost every other application in existence. Secondly, the icons have been grouped into tools, transformations, views and settings.
A boost curve, or secondary curve, allows the transformation of the input saturation or luminosity as a function of the hue, as specified by the curve. For example, to desaturate all hues except red, a boost curve that is negative for all hues apart from red can be used.
Boost curves are defined as a positive or negative boost (about the existing value) as a function of the hue. The boost is represented along the x axis and the hue along the y axis. The central boost values are shown in the relevant hue.
The boost curve transformation is coded as below assuming the boost method returns the curve value for the given hue. (See the stet image representation if this isn't clear).
for(var byte = 0; byte < image.length; byte += 4)
{
  var hsl = rgb_to_hsl([image[byte], image[byte + 1], image[byte + 2]]);
  hsl.saturation += boost(hsl.hue);
  var rgb = hsl_to_rgb(hsl);
  image[byte] = rgb[0];
  image[byte + 1] = rgb[1];
  image[byte + 2] = rgb[2];
}
Photos edited in Stet.io can be cropped using the crop tool, to a fixed, free or auto size. Cropping to a free size is the default, and is activated by clicking and dragging over the photo to choose the area to crop. As you drag, the parts of the image to be cropped out are greyed out.
The auto crop button will automatically crop the image to the minimum size required to display the image. This is often useful after shrinking, transforming or editing a photo.
Finally a fixed size crop can be activated by typing in a width, height or both and selecting the relevant unit. For example a typical 7"x5" photo crop can be activated by typing in 7, 5 and selecting inches. Moving the mouse over the image will allow you to select the area of the photo to be cropped.
Images are stored and edited in terms of pixels rather than inches or centimeters. Therefore inches/centimeters are converted into a pixel size automatically by Stet.io using a Dots Per Inch (DPI) value. If you want to print your photo it is important to match the DPI value used in Stet.io to that of the printer by clicking and choosing the correct value. The default DPI value in Stet.io is 300.
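As a minimal sketch of the conversion (the function name and unit strings are illustrative),
function toPixels(length, unit, dpi)
{
  if(unit === "in")
    return Math.round(length * dpi);
  if(unit === "cm")
    return Math.round(length * dpi / 2.54);  // 2.54 cm per inch
  return length;  // already in pixels
}

toPixels(7, "in", 300);  // 2100 pixels for the 7" side of a 7"x5" print at 300 DPI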
Stet.io has just been reviewed by I Love Free Software; read more by clicking the link below.
Image noise is a random variation of a pixel's colour that degrades the image quality. This degradation is sometimes a desired effect, in order to make an image look older or taken by a different camera, however it is mostly an unwanted artefact of the image. Stet.io has filters to both add and remove noise from images, as explained below.
Noise reduction can be achieved by running a mean or median filter on the image, depending on the type of noise present. Gaussian-like noise is best removed by the mean filter, whereas salt and pepper-like noise is best removed by the median filter. Both the noise generation and reduction filters are present in Stet.io.
Gaussian noise is a Gaussian random variation of a pixel's colour from the true value that is present in every pixel.
This is coded as below, assuming the Gaussian(mean, variation) function returns a Gaussian distributed random number about the mean value. (See the stet image representation if the code isn't clear).
for(var byte = 0; byte < image.length; byte += 4)
{
  image[byte] = Gaussian(image[byte], variation);
  image[byte + 1] = Gaussian(image[byte + 1], variation);
  image[byte + 2] = Gaussian(image[byte + 2], variation);
}
Gaussian noise is best reduced by replacing the value of a pixel with the mean or average of the pixel's neighbours.
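As a minimal sketch of a 3x3 mean filter over the stet image representation (assuming the image width and height are available, and ignoring the border pixels for simplicity),
var output = new Uint8ClampedArray(image.length);
for(var y = 1; y < height - 1; y++)
{
  for(var x = 1; x < width - 1; x++)
  {
    for(var channel = 0; channel < 3; channel++)
    {
      // Average the 3x3 neighbourhood around the pixel for this channel
      var sum = 0;
      for(var dy = -1; dy <= 1; dy++)
      {
        for(var dx = -1; dx <= 1; dx++)
        {
          sum += image[((y + dy) * width + (x + dx)) * 4 + channel];
        }
      }
      output[(y * width + x) * 4 + channel] = sum / 9;
    }
    // Keep the alpha channel unchanged
    output[(y * width + x) * 4 + 3] = image[(y * width + x) * 4 + 3];
  }
}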
Salt and pepper noise is so called due to the presence of white (salt) and black (pepper) pixels randomly distributed in the image. In this form of noise the pixel is either white, black or the true value, as shown below.
for(var byte = 0; byte < image.length; byte += 4)
{
  if(Math.random() < noise_probability)
  {
    if(Math.random() < 0.5)
    {
      image[byte] = 255;
      image[byte + 1] = 255;
      image[byte + 2] = 255;
    }
    else
    {
      image[byte] = 0;
      image[byte + 1] = 0;
      image[byte + 2] = 0;
    }
  }
}
Salt and pepper noise is best reduced by replacing the value of a pixel with the median of the pixel's neighbours.
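A minimal sketch of a 3x3 median filter, under the same assumptions as the mean filter sketch above,
var output = new Uint8ClampedArray(image.length);
for(var y = 1; y < height - 1; y++)
{
  for(var x = 1; x < width - 1; x++)
  {
    for(var channel = 0; channel < 3; channel++)
    {
      // Collect the 3x3 neighbourhood and take the middle value
      var neighbours = [];
      for(var dy = -1; dy <= 1; dy++)
      {
        for(var dx = -1; dx <= 1; dx++)
        {
          neighbours.push(image[((y + dy) * width + (x + dx)) * 4 + channel]);
        }
      }
      neighbours.sort(function(a, b) { return a - b; });
      output[(y * width + x) * 4 + channel] = neighbours[4];
    }
    // Keep the alpha channel unchanged
    output[(y * width + x) * 4 + 3] = image[(y * width + x) * 4 + 3];
  }
}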
Level adjustment transforms the input to a new output as specified by minimum, mid-point and maximum tones.
Inputs below the minimum are transformed to 0, those above the maximum to 255, and those in between are gamma adjusted depending on the mid-point, with gamma = 2 * mid_point / (max - min).
Level adjustment allows the tone of the image to be improved by utilising the colour range more efficiently. This is why the colour histograms are displayed, as they help guide level adjustment. For example an image with a histogram that has no values around 0 brightness (left of the histogram) should be adjusted with a higher minimum point. This will stretch the tones over a greater range of brightness by utilising the range below the minimum point, thereby increasing the contrast and improving the visibility. Equally, decreasing the maximum point for an image lacking bright pixels increases the contrast by utilising the range above the maximum point.
The tones between the minimum and maximum points are gamma adjusted, with the gamma value set by the mid-point. Moving the mid-point to the left enhances the contrast of the brighter tones at the expense of decreased contrast in the darker tones. Therefore the mid-point is best moved towards sparsely populated histogram regions.
Level adjustment is coded as below. (See the stet image representation if this isn't clear).
function adjust(input)
{
  if(input < min)
    return 0;
  else if(input > max)
    return 255;
  else
    return 255 * Math.pow((input - min) / (max - min), gamma);
}

for(var byte = 0; byte < image.length; byte += 4)
{
  image[byte] = adjust(image[byte]);
  image[byte + 1] = adjust(image[byte + 1]);
  image[byte + 2] = adjust(image[byte + 2]);
}
Layer masks allow localised pixels of a layer to be excluded from the overall image, without deleting those pixels from the layer. The mask is best understood as a sheet on top of the layer that prevents the pixels below from being seen. White pixels in the mask allow the pixels below to be seen, black pixels prevent them and grey pixels alter the opacity of the pixel. Masks are typically used to enable non-destructive editing, which will be a future blog post.
The mask operation can be considered as an alpha channel adjustment to the layer, as below. (See the stet image representation if this isn't clear).
for(var byte = 0; byte < image.length; byte += 4)
{
  // Use the mask's grey value as the layer's alpha
  image[byte + 3] = mask[byte];
}
To add a mask to a raster layer of the image, click the mask button. The mask is then the right thumbnail of the pair. Clicking again will apply the mask to the layer and delete the invisible pixels - this is a destructive action.
Curve adjustment transforms the input shade to a new output shade as specified by the curve.
For example, the invert curve inverts the shade of each colour by applying the operation output = 255 - input.
Curve adjustment is more powerful than many of the other transformation techniques as each channel can have an individual curve and each curve can consist of many control points. For example the preset warmer curve splits the red and blue channels, whilst leaving green unchanged. The red shades are transformed to be brighter and the blue darker, warming the image.
Other common curves include increasing (and decreasing) the contrast. These curves work by transforming shades away from (or towards) the midpoint shade, 128, by adding control points below and above the linear line. To increase contrast, input shades below the midpoint should be transformed further from the midpoint, hence a control point is added below the linear line; the opposite is true for input shades above the midpoint. Higher contrast enhancement can be achieved by moving the control points further from the linear line in both directions.
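As a minimal sketch of the effect (using a simple linear stretch about the midpoint rather than interpolated control points, with an illustrative strength value),
function contrast_curve(input)
{
  var strength = 1.2;  // illustrative amount of contrast increase
  var output = (input - 128) * strength + 128;
  // Clamp to the valid shade range
  return Math.min(255, Math.max(0, output));
}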
The curve transformation is coded as below assuming the _curve functions return the output shade given the input shade. (See the stet image representation if this isn't clear).
for(var byte = 0; byte < image.length; byte += 4)
{
  image[byte] = red_curve(image[byte]);
  image[byte + 1] = green_curve(image[byte + 1]);
  image[byte + 2] = blue_curve(image[byte + 2]);
}
Stet.io is a repository for your images, as every image is stored under version control. What defines a version is up to you, save as many or as few as is best suited for your work. The only constraint is that the original is always the first version.
Version control is a record of an image's history and a backup in case things go wrong. It is often done manually, by saving each new version of the image as a different file e.g. photo_v1, photo_v2, or by saving only the previous version and the current one e.g. photo_old and photo. The version control on stet.io does this automatically and additionally stores the parent of each version.
The red eye effect occurs when a strong light source, typically a flash, is reflected off the retina in the picture, giving a red rather than black pupil. As the reflection is specular, it requires the flash source to be near the lens. The red eye effect is therefore best eliminated by changing the lighting of the picture, for example by using ambient light rather than a flash. If this can't be done, it can be removed in stet.io.
To remove red eye(s) from an image, visit the editor and import your image. Then select the red eye remove tool. Now click on the red eye you'd like to remove. You might optionally have to change the size of the tool to fit the eye.
The red eye effect is removed by setting the value of the red channel to the average of the green and blue channels. This helps preserve the intensity of the pixel, avoiding a 'dead-eye' effect. This operation is simply coded as, using the stet image representation,
for(var byte = 0; byte < region.length; byte += 4)
{
  var green = region[byte + 1];
  var blue = region[byte + 2];
  var red = (green + blue) / 2;
  region[byte] = red;
}
Images edited in stet are represented in true colour, or 32-bit RGBA per pixel. This means there are 8 bits per Red, Green and Blue channel and therefore 256^3 colours per pixel. Additionally, 8 bits per pixel are used for the alpha channel, which defines the opacity or transparency of the pixel. Programmatically the bytes are accessed in RGBA order per pixel, such as
for(var byte = 0; byte < image.length; byte += 4)
{
  var red = image[byte];
  var green = image[byte + 1];
  var blue = image[byte + 2];
  var alpha = image[byte + 3];
}
Stet.io's free, powerful new image editor requires no installation and works in all modern browsers. Try it now. This editor is based on the relatively new HTML5 and CSS3 standards, specifically the canvas element.
Stet.io's subscription cloud storage allows you to store and access your images on any device, anywhere you have internet access. Your images are securely stored and always belong to you.