This work-in-progress project is inspired by my passion for fashion and ecommerce. I am building a site that acts as a personalized news feed for retail items at your favorite online shops. It will additionally support background replacement with user-selected skin tones, and hopefully also in-browser placement manipulation by the user.

The web application will be built using Ruby on Rails, with the image segmentation algorithm written in C++ (using OpenCV). Weekly updates on the project's progress can be found below on my blog. Stay tuned for more details!

See the in-progress source code for the background replacement algorithm, and the web application, here.

Blog Entries

DECEMBER 8th, 2016.

These last couple of weeks I've been focusing on tidying up the image segmentation algorithm and testing it against common retail image styles, as well as developing the backend structure of the site. Integrating the C++ code into the Ruby on Rails application is proving more difficult than initially thought. Below are some of the results of the segmentation algorithm.

Below is a screenshot of what a user's feed page looks like with the image segmentation feature applied to all images. Note that this screenshot is not fully representative: a user will most likely choose a single skin color, so all the images would end up with the same background color.

Downfalls (still exist!)

This algorithm still contains some downfalls.

  • Filling in parts of the foreground. This issue hasn't really been solved; it stems from the k-means clustering itself, so some images simply aren't going to be segmented well using k-means.
  • Gradients. This algorithm is still weak on gradient backgrounds. An alternative method will probably have to be explored.
  • Overall, this method is a lot better than the previous one, but there are still improvements to be made.

NOVEMBER 27th, 2016.

This week I've been focusing on fleshing out the backend of the site. User models and data have been added, and database interactions have been defined. I plan to start the integration of the C++ code into the Ruby on Rails project by inputting sample images for sample shops first. Then, users will be able to upload their desired image for the skin-color background replacement and see the algorithm in action.


OCTOBER 25th, 2016.

This week I applied a Gaussian blur to the edges of the foreground and combined the two algorithms to create a more effective and selective one.

BFS from automatically selected points

Previously, I was iterating through all unvisited pixels as potential BFS starting points. This led to parts of the foreground being filled, as discussed. I decided it was better to leave parts of the background untouched than to incorrectly fill in the foreground with the background color. Therefore, I reused the logic from my original, linear algorithm: the background fill starts at whichever of the 4 corners is determined to be the most likely background color. The fill itself is done via the BFS traversal from my second algorithm, which finds the edges of the foreground much more effectively and quickly. Then, once the foreground (with potential patches of background) has been isolated, the algorithm looks along the edges of the image, and wherever it detects the background color it starts another BFS from that edge pixel. This addition simply ensures that any background patches touching the edge are filled in.


This algorithm still contains some downfalls.

  • Filling in parts of the foreground. Some of the patches from the previous, BFS-only algorithm filled in parts of the foreground whose color is too similar to the background, because the k-means step clusters them to the same color. The incorrect replacement of these foreground pixels is therefore independent of the BFS traversal; it comes from the oversimplification of the clustering. This probably means that some images need to be analyzed differently from the start.
  • This new algorithm has no way of detecting patches of background that are not touching the edges. Handling this, however, would require a user to manually indicate such a patch, for both k-means and watershed.
  • Overall, given the goal, aggregation will most likely require combining multiple methods to produce the optimal outcome.

The website now allows users to upload images and pick the point that contains their desired background color.

OCTOBER 19th, 2016.

Below are some of the results of the previous algorithm: k-means clustering plus a linear traversal of the pixels to determine the background pixels and the foreground outline.

In addition to incompletely finding the edges of the foreground, one of the issues with this algorithm was that I couldn't fill in any patches of background that weren't reachable from the corners. I tried to apply a different method to find these secondary patches of background.

Below are some of the results of the new algorithm. It employs k-means clustering, which seems to perform best with 5 clusters on most retail images, and a BFS traversal of the pixels to determine where the background begins and ends.



Though the issue of isolated patches going unfilled was solved, there is now another complication: portions of the foreground are incorrectly filled with the new background color, because highlights and the like can be the same color as the original background. One potential fix is to increase the threshold for the number of pixels a BFS patch must collect before it is considered "background," but increasing it too much causes the algorithm to miss some actual background pockets.


This algorithm also performed best with relatively solid light backgrounds. Gradient backgrounds and darker backgrounds produced undesirable results.

Shadows also introduced complications.


Luckily, it seems that many ecommerce sites have plain images with relatively light, solid backgrounds, in addition to any front-page images with more complicated, distracting backgrounds. The algorithm performed a bit better, almost as expected, on these images. Therefore, if the crawler only aggregates and applies this background replacement algorithm to images with simpler backgrounds, it should perform better.


The next steps to improve the algorithm may be to apply a Gaussian blur to the detected edges, to minimize the appearance of pixelation when the contrast between the original background and the new background color is large. I also need to find a way to minimize the filling in of foreground patches whose colors are similar to the background's. An important takeaway is that different images (the content and type of which are out of the control of any aggregating product) will probably require different methods of image segmentation. In the next iteration, I will attempt to combine these two algorithms (or others) and apply them to images conditionally, to produce the best results for as many different types of images as possible.


As far as the web interface goes, I am continuing to build the site using Ruby on Rails and will integrate with AWS for the final product. Having the C++ code fully integrated and usable via the web interface is the goal for the final product.

OCTOBER 21st, 2016.

This week, I began working on the image segmentation functionality. I have a working C++ program for the k-means algorithm, which takes the pixels of an image and clusters them so that similar colors are grouped. The number of clusters can be defined, but results seem to level out at a relatively low number, so there was no need for very high counts; it is currently set to 5. This aids in removing the background, because the background is usually clustered to one or a couple of colors/shades. Right now the program simply finds the pixels that are the background color (determined by sampling the four corners of the image and picking the most likely color) and replaces them with a predetermined skin tone. Below are some examples of what the (imperfect) algorithm generates.

Currently the algorithm applies k-means clustering to a given image. Then, sampling the four corners of this clustered image, it determines which corner color is most likely to be the background by taking the color that the most corners have in common. The algorithm then runs through each row, replacing every pixel whose clustered color matches the determined background color with the desired background color. When the row hits a non-background color, it instead fills in the pixels from the original image.

The next step for this algorithm is finding a way to correctly remove the pockets of background that are created when the foreground encloses a space that does not reach the left or right edge, and which therefore goes undetected by this horizontally linear detection. This will ensure that the entire background is completely removed.

Another next step is a non-flat background replacement. Many existing backgrounds of retail images are a subtle, neutral color but have a slight gradient with somewhat darker edges. This adds more dimension to the image, which will be helpful for placing the product in real space and, in this case, making the background look more like real skin.

The next steps are: continuing to work on this image segmentation algorithm, perhaps combining methods (including the watershed transformation) to get a better segmentation. I also need to integrate the Facebook API so that users can create accounts (and have a login screen), and flesh out the account page details.

OCTOBER 9th, 2016.

This week I decided to build my website using Ruby on Rails, and began setting up the project and implementing the front-end design comps I created through sketches and the app Sketch. I will build interactive prototypes in InVision as I flesh out the details of the application and its specific functionality, but the skeleton for the main page has been set up so far. I also looked into ways of implementing the image manipulation algorithms I discussed and concluded that I will most likely be using OpenCV within my Ruby on Rails app.

I also further explored the different algorithms used in computer vision to perform image segmentation, the process of separating the foreground from the background. I settled on several algorithms that, for the types of images I will most likely be handling, will hopefully be effective in combination: k-means, Gaussian blur, and the watershed transformation. Though k-means and watershed are the ones most commonly used for image segmentation, each has downfalls, such as producing a lot of noise in the image or over-segmentation, respectively; hopefully both downfalls can be minimized by using the two in conjunction.

Continuing on this note, since the alpha review is next week, there is a specific set of features I want to have implemented by then. I want to have the main framework for the site, with dummy images creating a visual representation of the site layout. This means the web scraping functionality will not yet be present: the interactivity and the various pages will be fleshed out and ready to receive dynamic content, but the content displayed at this stage will not be real-time content from the web. Additionally, I wish to have begun on the image editing function, perhaps having a function that manipulates some part of a manually loaded image, without integration into the site at this time.

OCTOBER 3rd, 2016.

After this week's meeting, I decided to pursue the idea of replacing an image's background with a skin-tone color. Often, the color of clothing can greatly change the appearance of one's skin tone - making someone appear dull or vibrant depending on the color of fabric they are wearing. In short, some colors can be flattering and others not. This is one of the advantages of getting to try on clothing in stores - there are no surprises. Though this tool wouldn't remove all the difficulties of shopping online (such as not knowing whether a garment will fit correctly), it could remove one of the challenges that shoppers may not even realize they should consider. The skin-tone color will be determined from a user-selected portion of skin in an image that the user can upload or take when requested. This color can be saved in the user's profile and therefore used across the site for any image and any product.

I then realized that replacing the backgrounds of all images with the same skin-tone color could have a negative effect on the variety of images on any given page, and for most skin tones would probably also be too dark a background for every image. Therefore, I decided the best way to use this feature within a search and list/news-feed format would be to give the user the option to view the image against their own skin tone after they have already clicked on the image and are viewing its details (probably on a separate page). That way, the user can 'try on' the color of the product against their own skin without the page being overwhelmed by dark backgrounds.

Some resources regarding removing backgrounds can be found on Polyvore's and Lyst's engineering blogs, here and here respectively.

In terms of progress, as mentioned in my previous post, this week I will be working on finishing the site design and then beginning to develop the front end. In addition, I hope to decide on an image segmentation algorithm I believe is most suitable for this purpose and begin its implementation; I hope to find one that suits the types of images I expect to gather from the ecommerce sites. Some useful sites and papers I have found can be seen here, here, here, and here. I will also need to construct a system diagram for the structure of the application before implementing the backend functionality for the site.

OCTOBER 2nd, 2016.

This past week I have continued to research the industry and to analyze the data gathered from the user research, to create a solid mapping of the projected product. Because the image editing and analysis algorithm is going to be a significant portion of this project, I focused on researching what styles different sites had in common and what made browsing through them most satisfying. Some sites were better at presenting their collections of images than others. Below are some examples and analysis.

In general, a majority of clothing images seem to feature models against a neutral-colored background with the entire product in view, or with very small parts of it off the frame. Some sites opted to remove the background and leave it white, which looks a bit unnatural; I find that the most visually appealing, if less professional-looking, sites kept the neutral background.


Another observation is the framing of dresses: images on these sites tend to include either the entire model, usually with a bit of the feet or the top of the head cropped out, or a zoomed-in, cropped version filling the frame with mostly just the product.


Tops were a bit more varied - many sites had interesting and unique crops and poses, with the product sometimes taking up only around a fourth of the frame. Other times, the whole outfit is displayed, which conveys a cohesive look but also draws attention away from the main product and can be distracting.


Some sites had interesting search-and-list functionality for bottoms. Anthropologie, Need Supply Co, and Urban Outfitters all show an entire outfit at first, but on hover zoom in to fill the frame with just the bottom or a detail of the pants/skirt.

Overall, it was interesting to note which images I was most drawn to. I really enjoyed the neutral backgrounds and the inclusion of shadows, where there is a sense of the space the model occupies. I particularly disliked the images with no models and just the clothing against a white background, and I also disliked models against a white background, though not as much as a model-less image.

Cropping also gave the images an interesting twist and a more modern, engaging feel. It was also easier to scan through certain sections of images when I knew exactly what part of an outfit I was looking at. If my site is going to provide a feed, especially one not always categorized into just 'tops' or 'bottoms', users will need their vision guided to the actual product.


I explored some of the image editing that other ecommerce aggregator sites may be employing, to get a sense of what others thought makes for a great online shopping experience. Many of them opted to remove the background (or most of it) and place the product against a uniformly white background, to create cohesion across a site that includes images from multiple sources.



One thing I noticed was the excess white space between images. Something I found so satisfying about the native fashion sites was their great use of color in space, but these aggregator sites seem to allow a bit too much variance in image size, as well as removing the background entirely, resulting in a lot of unused and unengaging space on the page.


All in all, though these aggregator sites are attempting to create a cohesive look, removing the background makes their sites look bare and uninviting due to the excess white space and awkward differences in image size. Though the actual image content of each source site is out of my control, my approach will be to edit the images to look coherent without removing the background.

I believe the most satisfying part of browsing shops and fashion sites online is the imagery. Sometimes, as I've discovered through research, people are on the hunt, but sometimes people shop just because they are bored or find pleasure in flipping through images (think beautiful magazines, except on the web) to pass time or get inspiration. I want to translate that experience into a news-feed-type platform, so keeping the image viewing experience as beautiful as possible is extremely important.

In order to keep the images cohesive and beautiful, I have come up with the following approach:

  1. Remove a distracting background (if there is one) and insert a neutral background (much like the ones that so many ecommerce images have) to replace it.
  2. If an image already has a neutral background, fit and crop the image based on the content the listing should highlight.
  3. Always use an image of the product with a model, if one is available. If there is no such picture, crop and rotate the model-less image in an interesting manner and insert a neutral background as well.
  4. Lastly, apply a filter to the image without altering the integrity of the color of the product.
  5. I am also exploring methods of adding imitation shadows to create that sense of familiar and relatable space inside an image once a background is replaced. More on this technique to be discussed later.

I hope these steps will not only make for a more cohesive look, via the common filters and consistent focus of the images, but also create an engaging collection that captures users' imagination and attention, whether the experience is for pleasure or for a purpose.

In terms of the functionality of the site, I also created an information hierarchy graph in the form of a user flow diagram, outlining the functionality of the website. Understanding a user's interaction with the site helped outline the designs needed before beginning development.

I then sketched designs of the site using pen and paper. I began working on wireframing the main pages and plan to have full mockups and an interactive prototype by next week. I also hope to have a couple of the main pages developed, if possible.

For next week, I will have high-fidelity mockups of the site created in Sketch, and an interactive prototype created in InVision to demonstrate the behavior of the product. Once these are finalized, I will begin developing the front end of the site as well.

SEPTEMBER 25th, 2016.

So far, I have conducted user research in the form of surveys and questionnaires, as well as informal interviews on related topics such as online shopping in general and similar websites with various approaches to a similar problem. I asked many people I know, including family and friends, to get a sense of different online shopping behaviors and needs.

I am looking for ways to differentiate my product from those that are already out there, and I believe my approach should focus on the image editing algorithms.

This following week I hope to gain more insight into the field and the limitations of the ecommerce experience. Some sites I have come across as having similar ideas are here, here, and here.

SEPTEMBER 18th, 2016.

This week I completed a plan for the project timeline. I will have 12 weeks to complete this project, as the final review is on the 12th of December. I also completed the design document outlining the steps and implementation approaches for the final product, as well as research into existing resources and similar approaches.

This coming week, I plan on gaining an even deeper understanding of the online fashion industry and the ecommerce world. I will do this via user research and testing, further online research, and reaching out to the resources I have received.