Google patents motorized Pixelbook display that can adjust the screen for you

As pieces of technology hardware, laptop hinges really haven’t changed dramatically in a while: you lift up your screen, your computer turns on, and you go about your business. But Google is imagining a world in which things are more automated, as seen in a new patent spotted by Patently Mobile for a motorized Pixelbook hinge that could automatically lift the display when you tap the lid.

Sure, we’ve had detachable screens, displays that can flip around, and some weird-looking alternatives, like the Surface Book’s chunky design or Lenovo’s Yoga line with its watchband-style links. But Google’s patent — officially for a “Notebook computer with motorized display positioning” — speaks to a magical world where laptop users are freed from the tyranny of having to engage in physical effort to use their computers.


The patent also speculates about how the technology could be used. A Windows Hello-style facial recognition system could automatically verify and unlock a user, and continually adjust the screen during use to angle comfortably toward your face. Then, if the computer recognizes that you aren’t sitting in front of it anymore, it could automatically close and lock itself to prevent anyone else from using it. (No more “hacked” Facebook messages from your trolling friends.)

Of course, this is all just a patent that Google filed back in 2013, so it’s always possible that this will never make its way into a real product. But with the resurgence of automatic unlocking systems like the iPhone X’s Face ID, perhaps a motorized laptop lid that can recognize users or lock them out isn’t that crazy of an idea.

Comments

What I want, and what would be as useful or more, is an eye tracker that senses which display (laptop, external monitor, etc.) you’re looking at and makes the window or browser you’re looking at active, without you having to select it first with your mouse, finger or stylus.

Also imagine scrolling/clicking/typing/other interactions via eye tracking and facial recognition alone.
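
No current OS or browser actually exposes a gaze feed, so treat the following as a rough sketch of the idea with made-up platform hooks (`onGazeSample`, `listWindows`, and `focusWindow` are all hypothetical): focus-follows-gaze is basically a hit test against window bounds plus a short dwell timer so a passing glance doesn’t steal focus.

```ts
// All of these types and platform hooks are hypothetical; no real OS exposes them.
interface GazeSample { x: number; y: number; timestampMs: number }   // virtual-desktop coords
interface WindowInfo { id: string; bounds: { x: number; y: number; width: number; height: number } }

declare function onGazeSample(cb: (s: GazeSample) => void): void;    // assumed eye-tracker feed
declare function listWindows(): WindowInfo[];                        // assumed window-manager query
declare function focusWindow(id: string): void;                      // assumed window-manager action

const DWELL_MS = 300; // require a short dwell so a stray glance doesn't steal focus
let candidate: { id: string; since: number } | null = null;

onGazeSample((sample) => {
  // Hit-test the gaze point against every window's bounds.
  const hit = listWindows().find((w) =>
    sample.x >= w.bounds.x && sample.x < w.bounds.x + w.bounds.width &&
    sample.y >= w.bounds.y && sample.y < w.bounds.y + w.bounds.height
  );
  if (!hit) { candidate = null; return; }
  if (candidate?.id !== hit.id) {
    candidate = { id: hit.id, since: sample.timestampMs };   // started looking at a new window
  } else if (sample.timestampMs - candidate.since >= DWELL_MS) {
    focusWindow(hit.id);                                     // focus follows gaze after the dwell
  }
});
```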

The world is debating whether tablets or laptops are the future of computing. I’d say neither: both keyboards and touchscreens suck.

No more smearing my finger across glass, or clicking away on plastic blocks. Give me a future where I can just look at a screen and it does what I want it to, please!

You just gotta get a little implant behind your eyes and a battery pack behind your ear and you’re good to go

There are people working on this… but I think there is definitely going to be a bit of a disappointment once eye-tracking becomes ubiquitous.

Even if the eye-tracking was lag-free and perfect, eye gaze simply doesn’t deliver that much information to the computer. Eye-tracking alone could never distinguish between what I want to interact with and what I simply want to look at.

There are also only limited types of input you can do – look at a thing, and look at a thing longer. Typing with eye-gaze alone is a pretty slow affair for exactly this reason. Compare it with a mouse cursor – you have to move it yourself, but you can left-click, right-click, middle-click, scroll, and many programs and websites have certain interactions just from hovering your cursor (see alt-text on images, or menus that expand on mouse-over).

I’m not saying there isn’t a place for eye-tracking – only that there will need to be a really considered strategy for how to use that input in a cohesive and intuitive way.

Give me a future where I can just look at a screen and it does what I want it to, please!

This would require a computer that can read your brain directly, for all intents and purposes. I’m sure it’s technically possible – but even then you need to distinguish between things you think you want to do and things you actually want to do.

If anything, I think computing is going to incorporate as many input types as it can cohesively put together. We’ll probably always have keyboards, mice, and touchscreens – they can all work in unison without stepping on one another. Any future input methods are more likely than not just going to add to the list of options.

This would require a computer that can read your brain directly, for all intents and purposes. I’m sure it’s technically possible – but even then you need to distinguish between things you think you want to do and things you actually want to do.

Sounds like a problem to us now, but I trust someone will figure it out eventually. I would love to be able to just think and poof there’s an image of my thought on screen. So much more efficient!

Just before I die in about 65 years, that’s all I ask.

I’m not sure. Some of the things that pop into my weird mind… I wouldn’t always want instant images lol.

Don’t get me started on my own weird mind!

But I’m assuming they could figure out how to make the distinction between something that just pops up in your mind, and something that you really want to put on screen.

Facebook is already working on something like that:

https://techcrunch.com/2017/04/19/facebook-brain-interface/

Even if the eye-tracking was lag-free and perfect, eye gaze simply doesn’t deliver that much information to the computer. Eye-tracking alone could never distinguish between what I want to interact with and what I simply want to look at.

There are also only limited types of input you can do – look at a thing, and look at a thing longer.

That’s not entirely true. You’re limiting yourself by thinking of gaze exclusively as an intentional input interface. Yes, the amount of things you can convey with a glance is limited. But there’s a lot of information that a suitably advanced software system could glean by watching exactly where you look on the screen anyway: not just at the present moment, but also the history leading up to it.

Imagine an eye-tracking system which was advanced and precise enough to watch as you take in information, and then build an accurate mapping of the parts you’ve examined vs. what you haven’t. While it’s true that "just looking somewhere" doesn’t reveal all that much, looking at lots of somewheres in sequence becomes pretty unambiguous. It’s easy to distinguish actually reading text from merely looking at it, by watching your eye track from word to word, line to line, in sequence.
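
As a rough illustration of that distinction (the `Fixation` shape and all the pixel thresholds below are invented for the example, not taken from any real eye tracker), reading shows up as runs of short rightward hops along one line, broken up by large leftward return sweeps to the next line:

```ts
// Hypothetical fixation record from an eye tracker; field names are assumptions.
interface Fixation { x: number; y: number; durationMs: number }

// Heuristic: a window of fixations "looks like reading" when most saccades are
// short rightward hops on roughly the same line, plus occasional return sweeps.
function looksLikeReading(fixations: Fixation[]): boolean {
  if (fixations.length < 5) return false;
  let readingLikeSaccades = 0;
  for (let i = 1; i < fixations.length; i++) {
    const dx = fixations[i].x - fixations[i - 1].x;
    const dy = fixations[i].y - fixations[i - 1].y;
    const wordHop = Math.abs(dy) < 10 && dx > 0 && dx < 120;   // word-to-word, same line
    const returnSweep = dx < -200 && dy > 0 && dy < 40;        // back to the start of the next line
    if (wordHop || returnSweep) readingLikeSaccades++;
  }
  return readingLikeSaccades / (fixations.length - 1) > 0.7;   // mostly reading-shaped movement
}
```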

So imagine a browser that could actually watch you read the information displayed on a web page. There are all kinds of ways it could use that information to enhance the user experience.

Manual scrolling becomes a relic of ancient history. Why should you need to manually reposition the text, when the software notices every time you reach the end of a page or paragraph and automatically positions the next one for reading? (Intelligent text-tracking would take a bit of getting used to, so that the automatic actions were expected and helpful rather than surprising and confusing, but for at least some people it would quickly become far more convenient than tedious manual scrolling.)
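
A toy version of that auto-scroll behavior might look something like this, assuming some hypothetical `onGaze` feed in viewport coordinates; the real work would be in tuning it so it never surprises you:

```ts
// Assumed gaze feed in viewport coordinates; no browser ships anything like this yet.
declare function onGaze(cb: (point: { x: number; y: number }) => void): void;

const BOTTOM_ZONE = 0.85;  // gaze below 85% of the viewport height counts as "near the end"
const COOLDOWN_MS = 1500;  // don't keep firing while the page settles and the eyes catch up
let lastScroll = 0;

onGaze(({ y }) => {
  const now = Date.now();
  if (y > window.innerHeight * BOTTOM_ZONE && now - lastScroll > COOLDOWN_MS) {
    // Bring the next chunk of text up into the comfortable reading zone.
    window.scrollBy({ top: window.innerHeight * 0.6, behavior: "smooth" });
    lastScroll = now;
  }
});
```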

Ditto the resumption of interrupted sessions. Already, web browsers store the scroll position of the page as part of the session history, so that when you follow a link and then hit Back to return to the previous page, you end up back at the same place you were before. Now, imagine software that could actually keep your place in the page content, because it knows what you’ve read and what you haven’t. And imagine that ability extended to all browsing, even across devices and sessions. Get halfway through an article on your tablet at home before you have to leave for work, and decide to finish it on the train? Bring up that page from your recent history, and it loads up right at the point you left off, even though the formatting is completely different on your phone’s smaller screen. Don’t remember where you left off? Ask the browser to highlight the last thing you read before stopping.
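
Building on the read-tracking idea, a bare-bones sketch of “keep my place” could store which paragraph you last read rather than a pixel offset, so it survives reflowing onto a smaller screen. Here `localStorage` stands in for whatever cross-device sync service would actually be used:

```ts
// localStorage stands in for a real cross-device sync service here (an assumption).
function saveReadingPosition(pageUrl: string, lastReadParagraphIndex: number): void {
  localStorage.setItem(`read-pos:${pageUrl}`, String(lastReadParagraphIndex));
}

function restoreReadingPosition(pageUrl: string): void {
  const saved = localStorage.getItem(`read-pos:${pageUrl}`);
  if (saved === null) return;
  // We stored *which paragraph* was last read, not a pixel offset, so this
  // still works when the page reflows on a smaller screen.
  const target = document.querySelectorAll("p")[Number(saved)];
  target?.scrollIntoView({ behavior: "smooth", block: "center" });
}
```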

Or, on the technical side, browser prefetching could be limited to, or at least prioritize, the data for links that we actually look at. Why prefetch content for a linked page we haven’t even glanced at? We’re not going to follow any link that we don’t at least look at first.
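
A minimal sketch of that gaze-driven prefetching, assuming a hypothetical `onGazeDwell` helper sitting on top of the eye tracker; the `<link rel="prefetch">` hint itself is a standard browser mechanism:

```ts
// Hypothetical helper that fires once the gaze has dwelled on a matching element.
declare function onGazeDwell(selector: string, cb: (el: Element) => void): void;

const prefetched = new Set<string>();

// Only spend bandwidth on links the reader has actually looked at.
onGazeDwell("a[href]", (el) => {
  const href = (el as HTMLAnchorElement).href;
  if (prefetched.has(href)) return;
  prefetched.add(href);
  const hint = document.createElement("link");
  hint.rel = "prefetch";       // standard resource hint
  hint.href = href;
  document.head.appendChild(hint);
});
```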

The stuff I’ve described just scratches the surface; people will no doubt come up with endless other ideas for ways to apply eye-tracking information. Most of them will be uninteresting or awkward and won’t catch on. But my point is that there’s a lot more that software can do to enhance our human-computer interactions, if we don’t limit ourselves (and limit those interactions) to "how do I tell the computer _____?" As Uncle Lincoln was saying in his original comment, ideally we shouldn’t have to "tell" the computer every single thing we want. An advanced enough system can be far smarter than that.
