Amazon patents a mirror that dresses you in virtual clothes

Have you ever fretted over buying a suit or dress online for a wedding or another flashy event, wondering how it would look on your frame or if it would even fit? That might not be a problem soon now that Amazon has patented a blended-reality mirror that lets you try on clothes virtually while placing you into a virtual location (via GeekWire).

The patent describes the mirror as partially reflective and partially transmissive, using a mix of displays, cameras, and projectors to create the blended image. The imagined mirror works by scanning the environment to generate a virtual model, then identifying the user's face and eyes to determine which objects should appear as a reflection. Once this is done, the virtual clothes and scene are transmitted through the mirror to create the blended-reality result.
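The core of that description is a per-pixel decision: show the real reflection off the half-mirror, or transmit projected virtual content (the garment) instead, shifted to account for where the viewer's eyes are. A minimal sketch of that idea, with entirely hypothetical names (this is an illustration of the patent's described steps, not Amazon's implementation):

```python
def composite(garment_mask, eye_offset=(0, 0)):
    """Build a per-pixel plan for a blended-reality mirror.

    garment_mask: 2D grid of booleans, True where the virtual garment
    should cover the viewer. eye_offset: a toy parallax shift derived
    from tracking the viewer's eyes. Each pixel is marked either
    'transmit' (project virtual content) or 'reflect' (let the
    half-mirror pass the real reflection).
    """
    dy, dx = eye_offset
    h, w = len(garment_mask), len(garment_mask[0])
    plan = [["reflect"] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y + dy, x + dx  # shift mask by viewer parallax
            if 0 <= sy < h and 0 <= sx < w and garment_mask[sy][sx]:
                plan[y][x] = "transmit"
    return plan

# Toy 3x3 scene: the garment covers only the centre pixel.
mask = [[False] * 3 for _ in range(3)]
mask[1][1] = True
plan = composite(mask)
print(plan[1][1])  # transmit  (virtual garment shown here)
print(plan[0][0])  # reflect   (real reflection everywhere else)
```

The real product would do this with depth cameras and projector calibration rather than a boolean grid, but the reflect-vs-transmit split per region is the essence of what the patent claims.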

Not all patented ideas turn into products, of course. But Amazon acquired Body Labs last year, an AI-software and computer vision company that once touted its ability to create 3D models of human bodies in motion and then dress them in virtual outfits. You can also draw a line of progression between the blended-reality mirror and Amazon’s Echo Look, a hands-free video camera that takes floor-length photos of you to provide style advice and fashion recommendations.

Amazon has increasingly eyed a place in the fashion industry, ambitiously building up businesses in the sector, including launching its own clothing lines and developing algorithms that design clothing based on Instagram fashion trends. All considered, an Amazon-branded mirror that helps you get dressed isn’t as bizarre as it might have once seemed.


Hey @Cisco, perhaps you might want to give your lawyers a ring?

Something a bit more basic (but quite realistic) than this was done with the help of the Kinect a few years back, and I bet there are more of these concepts out there.

While the patent itself isn't anything new or that interesting, the conclusion of the article raises a few intriguing questions. I think cameras are finally getting good enough, and everyone has one on the go, so people can start to use these kinds of services more organically.

While today’s standard is pictures and perhaps videos of the dress or item, the next step is clearly to upload your own 3D self to replace the model and get a more accurate presentation of how it would look on you.

Oh, I'm not doubting the tech; I've worked with a number of these implementations, from Cisco and others.

It’s more that I’m surprised that Amazon thinks they can claim patent on a technology path that has been in development for over a decade and where numerous other companies have prior patents and proof of prior art. Amazon and Body Labs don’t seem to have invented anything new here that should be patentable.

A few thoughts on this…

- What you listed is a patent application vs. a granted patent for Cisco. There are various other virtual dressing room technology patents already granted. For instance, Zugara has multiple virtual dressing room patents going back to 2009.

- While companies can try to patent whatever they like, the real-world scenarios often aren't solvable at the time the patent is granted. As an example, there are a lot of concept videos that show 3D virtual dressing room technology using either special effects or very controlled environments. One of the dirty secrets of these videos is that they almost always show a thin female model trying on a virtual garment, and it's usually sleeveless. This is because devices like the Kinect cannot distinguish a body type or locate someone's waistline. So while a controlled video might work for one body type, it won't work for others (i.e. the virtual garment won't expand at the waist; that's technically possible but not feasible from a cost perspective). Second, with long-sleeved garments, the virtual elbow bend can never match up correctly with the real-world elbow bend, which hurts the illusion. That's why you almost always see sleeveless tops in demos.

- Finally, regardless of this recent patent or other patents in the virtual dressing room space, the largest obstacle is the cost of 3D assets. Unlike other verticals, retail often does not have CAD models for apparel, which means almost all 3D assets have to be created from scratch. This drives up the cost of virtual dressing room simulations, which then don't offer enough of a cost benefit, given they won't solve exact fit anyway. We did a blog post on this subject a while back. There is a very different cost structure between applying 'video game' style virtual models to an animated skeleton and applying a virtual model in real time to a person's actual body and movement.

I had this in a book in 2001!

I wish they could make a virtual mannequin scanned with my detailed measurements, so I could match different brands' sizes when shopping for clothes.

Fitting clothes virtually in AR is easy, but it wouldn't tell you anything about how good the fit would be in reality.

What we really need is for clothes makers to move away from the stupid .../S/M/L/XL/... chart and start giving us useful information, like we already do with pants: width under the arms, sleeve length, these kinds of things.
I have clothes ranging from M to XL for christ’s sake…

Sounds expensive. Which means it would be most practical in a retail setting, not in your bedroom. Maybe Amazon really is eyeing up Target or the like?

I swear I saw a commercial for this over the weekend. Maybe that was just another company with a "look what we hope to do this year" thing.

Step 1. Take off your clothes
Step 2. Wait for your naked body to be uploaded to Amazon servers
