Headroom, which uses AI to supercharge videoconferencing, raises $5M

Videoconferencing has become a cornerstone of how many of us work these days — so much so that one leading service, Zoom, has graduated into verb status because of how much it’s getting used.

But does that mean videoconferencing works as well as it should? Today, a new startup called Headroom is coming out of stealth, tapping into a battery of AI tools — computer vision, natural language processing and more — on the belief that the answer to that question is a clear — no bad WiFi interruption here — “no.”

Headroom not only hosts videoconferences, but then provides transcripts, summaries with highlights, gesture recognition, optimised video quality, and more, and today it’s announcing that it has raised a seed round of $5 million as it gears up to launch its freemium service into the world.

You can sign up for the waitlist to pilot it and get other updates here.

The funding is coming from Anna Patterson of Gradient Ventures (Google’s AI venture fund); Evan Nisselson of LDV Capital (a specialist VC backing companies building visual technologies); Yahoo founder Jerry Yang, now of AME Cloud Ventures; Ash Patel of Morado Ventures; Anthony Goldbloom, the cofounder and CEO of Kaggle.com; and Serge Belongie, Cornell Tech associate dean and Professor of Computer Vision and Machine Learning.

It’s an interesting group of backers, but that might be because the founders themselves have a pretty illustrious background with years of experience using some of the most cutting-edge visual technologies to build other consumer and enterprise services.

Julian Green — a British transplant — was most recently at Google, where he ran the company’s computer vision products, including the Cloud Vision API that was launched under his watch. He came to Google by way of its acquisition of his previous startup Jetpac, which used deep learning and other AI tools to analyze photos to make travel recommendations. In a previous life, he was one of the co-founders of Houzz, another kind of platform that hinges on visual interactivity.

Russian-born Andrew Rabinovich, meanwhile, spent the last five years at Magic Leap, where he was the head of AI, and before that, the director of deep learning and the head of engineering. Before that, he too was at Google, as a software engineer specializing in computer vision and machine learning.

You might think that leaving their jobs to build an improved videoconferencing service was an opportunistic move, given the huge surge of use that the medium has had this year. Green, however, tells me that they came up with the idea and started building it at the end of 2019, when the term “Covid-19” didn’t even exist.

“But it certainly has made this a more interesting area,” he quipped, adding that it did make raising money significantly easier, too. (The round closed in July, he said.)

Magic Leap had long been in limbo; AR and VR have proven incredibly tough to build businesses around, especially in the short to medium term, even for a startup with hundreds of millions of dollars in VC backing, and the company could probably have used some more interesting ideas to pivot to. Google, meanwhile, is Google, with everything in tech having an endpoint in Mountain View. Given all that, it’s curious that the pair decided to strike out on their own to build Headroom rather than pitch the tech to their respective previous employers.

Green said the reasons were two-fold. The first has to do with the efficiency of building something when you are small. “I enjoy moving at startup speed,” he said.

And the second has to do with the challenges of building things on legacy platforms versus fresh, from the ground up.

“Google can do anything it wants,” he replied when I asked why he didn’t think of bringing these ideas to the team working on Meet (or Hangouts if you’re a non-business user). “But to run real-time AI on video conferencing, you need to build for that from the start. We started with that assumption,” he said.

All the same, the reasons Headroom is interesting are also likely the ones that will pose big challenges for it. The newfound ubiquity of video calling (and our present lives working at home) might make us more open to it, but for better or worse, we’re all also now pretty used to what we already use. And many companies have now paid up as premium users of one service or another, so they may be reluctant to try out newer, less-tested platforms.

But as we’ve seen in tech so many times, sometimes it pays to be a late mover, and the early movers are not always the winners.

The first iteration of Headroom will include features that will automatically take transcripts of the whole conversation, with the ability to use the video replay to edit the transcript if something has gone awry; offer a summary of the key points that are made during the call; and identify gestures to help shift the conversation.

And Green tells me that they are already working on features to be added in future iterations. When a videoconference uses supplementary presentation materials, those can be processed by the engine for highlights and transcription as well.

And another feature will optimize the pixels that you see for much better video quality, which should come in especially handy when you or the person/people you are talking to are on poor connections.

“You can understand where and what the pixels are in a video conference and send the right ones,” he explained. “Most of what you see of me and my background is not changing, so those don’t need to be sent all the time.”
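
Headroom hasn’t published how its pipeline works, but the idea Green describes (only retransmit the regions of a frame that actually change) is easy to sketch. The tile size and change threshold below are illustrative assumptions, not anything the company has disclosed.

```python
# Illustrative sketch only, not Headroom's implementation: find which tiles of
# the current frame differ from the previous one, so static background pixels
# don't have to be re-sent every frame.
import numpy as np

BLOCK = 16        # assumed tile size, in pixels
THRESHOLD = 8.0   # assumed mean absolute difference needed to resend a tile

def changed_tiles(prev_frame: np.ndarray, curr_frame: np.ndarray):
    """Yield (row, col, tile) for tiles whose content changed enough to resend."""
    h, w = curr_frame.shape[:2]
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            prev_tile = prev_frame[y:y + BLOCK, x:x + BLOCK].astype(np.int16)
            curr_tile = curr_frame[y:y + BLOCK, x:x + BLOCK]
            # Static tiles (the bookshelf behind you) score near zero and are
            # skipped; moving tiles (face, hands) are yielded for encoding.
            if np.abs(curr_tile.astype(np.int16) - prev_tile).mean() > THRESHOLD:
                yield y, x, curr_tile
```

Modern video codecs already do motion-compensated versions of this; the pitch here is layering semantic understanding on top, so the system knows which pixels are a person worth spending bandwidth on and which are static background.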

All of this taps into some of the more interesting aspects of sophisticated computer vision and natural language algorithms. Creating a summary, for example, relies on technology that is able to suss out not just what you are saying, but what are the most important parts of what you or someone else is saying.
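
The article doesn’t say how Headroom builds its summaries. As a point of reference for what “finding the most important parts” means in practice, here is the classic extractive baseline: score each transcript sentence by the frequency of its content words and keep the top few. Treat it as a toy illustration, not the company’s method.

```python
# Toy extractive summarizer, purely for illustration: rank sentences by how
# often their content words appear across the whole transcript.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it",
             "that", "we", "you", "i", "for", "on", "so", "this", "are"}

def summarize(transcript: str, max_sentences: int = 3) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", transcript.strip())
    words = [w for w in re.findall(r"[a-z']+", transcript.lower())
             if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> float:
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower())
                  if w not in STOPWORDS]
        # Average frequency of content words; empty sentences score zero.
        return sum(freq[t] for t in tokens) / len(tokens) if tokens else 0.0

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Return the selected sentences in their original order for readability.
    return " ".join(s for s in sentences if s in top)
```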

And if you’ve ever been on a videocall and found it hard to make it clear you’ve wanted to say something, without straight-out interrupting the speaker, you’ll understand why gestures might be very useful.

But they can also come in handy if a speaker wants to know if he or she is losing the attention of the audience: the same tech that Headroom is using to detect gestures for people keen to speak up can also be used to detect when they are getting bored or annoyed and pass that information on to the person doing the talking.
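
The article doesn’t name the models or libraries Headroom uses for this. As a rough idea of how a “raise hand” cue can be prototyped, here is a sketch built on MediaPipe’s pose landmarks (my choice for illustration, not theirs): if a wrist sits above the nose in the frame, treat it as a raised hand.

```python
# Not Headroom's code: a simple raised-hand heuristic using MediaPipe pose
# landmarks. A wrist landmark above the nose landmark counts as a raised hand.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def hand_raised(frame_bgr) -> bool:
    """Return True if either wrist appears above the nose in this frame."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.pose_landmarks:
        return False
    lm = results.pose_landmarks.landmark
    nose_y = lm[mp_pose.PoseLandmark.NOSE].y
    # Image coordinates grow downward, so "above" means a smaller y value.
    return any(lm[wrist].y < nose_y for wrist in
               (mp_pose.PoseLandmark.LEFT_WRIST, mp_pose.PoseLandmark.RIGHT_WRIST))
```

Reading boredom or annoyance is a much harder problem, involving facial expression and attention models rather than a single landmark rule.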

“It’s about helping with EQ,” he said, with what I’m sure was a little bit of his tongue in his cheek, but then again we were on a Google Meet, and I may have misread that.

And that brings us to why Headroom is tapping into an interesting opportunity. At their best, when they work, tools like these not only supercharge videoconferences, but they have the potential to solve some of the problems you may have come up against in face-to-face meetings, too. Building software that actually might be better than the “real thing” is one way of making sure that it can have staying power beyond the demands of our current circumstances (which hopefully won’t be permanent circumstances).

How to Create Custom Emoji Mashups on Android

There are so many emojis at your fingertips nowadays, they’ve practically become their own language. Even still, sometimes you’ll find that there isn’t one that quite fits what you’re trying to say. The solution? Mash up two to create your own.

All you need to do is get your hands on the Gboard keyboard for Android, which allows you to mix two different emojis to create new ones via its “Emoji Kitchen” feature. (iOS users will have to use a website to create their mashups.)

The Emoji Kitchen launched back in February, but now that it’s been out for a while, we checked back in to see what sorts of mad scientist-like creations you can cook up.

For example, you can add a head-exploding effect to the yawning emoji to add even more emphasis when your friend shares a secret, or you can get extra creepy Halloween vibes by mashing a skull together with the jack-o-lantern face.

However, know that your powers of combination aren’t limitless. The Emoji Kitchen only supports certain emojis, and most of them just add faces to inanimate objects or change an emoji’s expression. You also can’t directly edit the final product—Gboard does the editing and gives you a selection of mashups to choose from.

Every face emoji seems to work, plus a handful of often-used inanimate objects like hearts, but you’ll see a ghostly “Nothing to see here” animation if you can’t combine your selected emoji with another.

How to create new emoji in Gboard’s “Emoji Kitchen”

Even if you can’t use certain emojis or directly edit the end results yourself, the hybrid emoji stickers Gboard spits out are pretty good, and it’s a neat way to personalize your messages without using an extra app or third-party website.

  1. Open an app with text input, and then open Gboard’s emoji section. (Note: Gboard needs to be your default keyboard app).
  2. Tap on an emoji. Make sure you test out a few of the emojis to see which ones are supported. If you do not see any combination suggestions, that means the app you’re using doesn’t support the Emoji Kitchen feature.
  3. If the emoji can be customized or combined with another, Gboard will offer up some suggestions in a menu above the keyboard. This might take a moment to show up.
  4. Slide through the suggested combos and select the new emoji to insert it into your message.

Update 10/22/20: We originally published this story in February of 2020. We have updated it in October 2020 with new examples (and a little help for iOS users who are missing out on all the fun).

Give Your Car a Spark of Life With an Anker Roav Jump Starter

Best Home Deals: The best home, kitchen, smart home, and automotive deals from around the web, updated daily.

Anker Roav Jump Starter | $66 | Amazon

Because of a price drop, you can get your hands on an Anker Roav Jump Starter for a low $66. It’s 12V and can jump-start gas engines up to 6.0L and diesel engines up to 4.0L. What are you waiting for? It’ll get you out of a pinch, and when you don’t need to get your car running, you can use it as a charger for your phones and other devices. Grab it before it’s gone.

Tesla’s ‘Full Self-Driving’ beta is here, and it looks scary as hell

This week, Tesla began pushing its “Full Self-Driving” (FSD) update to a select group of customers, and the first reactions are now beginning to roll in. The software, which enables drivers to use many of Autopilot’s advanced driver-assist features on local, non-highway streets, is still in beta. As such, it requires constant monitoring while in operation. Or as Tesla warns in its introductory language, “it may do the wrong thing at the worst time.”

Frankly, this looks terrifying — not because it seems erratic or malfunctioning, but because of the way it will inevitably be misused.

Early reactions to the software update range from “that was a little scary” to full-throated enthusiasm for CEO Elon Musk’s willingness to let his customers beta-test features that aren’t ready for wide release. This willingness has helped Tesla keep its position at the forefront of electric and autonomous vehicle technology, but it also presents a huge risk to the company, especially if those early tests go wrong.

A Tesla owner who goes by the handle “Tesla Raj” posted a 10-minute video on Thursday that purports to show his experience with FSD. He says he used the feature while driving down “a residential street… with no lane markers,” something Tesla’s Autopilot previously couldn’t do.

Right off the bat, there are stark differences in how FSD is presented to the driver. The visuals displayed on the instrument cluster look more like training footage from an autonomous vehicle, with transparent orange boxes outlining parked cars and other vehicles on the road and icons that represent road signs. The car’s path is depicted as blue dots stretching out in front of the vehicle. And various messages pop up that tell the driver what the car is going to do, such as “stopping for traffic control in 75 ft.”

The car also made several left- and right-hand turns on its own, which Raj described as “kind of scary, because we’re not used to that.” He also said the turns were “human-like,” insofar as the vehicle inched out into the opposite lane of traffic to assert itself before making the turn.

Another Tesla owner who lives in Sacramento, California, and tweets under the handle @brandonee916 posted a series of short videos that claim to show a Tesla vehicle using FSD to navigate a host of tricky driving scenarios, including intersections and a roundabout. These videos were first reported by Electrek.

The vehicles in both Tesla Raj’s and @brandonee916’s tests are driving at moderate speeds, between 25 and 35 mph, which has been very challenging for Tesla. Musk has said Tesla Autopilot can handle high-speed driving with its Navigate on Autopilot feature and low speeds with its Smart Summon parking feature. (How well Smart Summon works is up for debate, given the number of Tesla owners reporting bugs in the system.) The company has yet to allow its customers hands-off driving on highways, as Cadillac does with its Autopilot competitor Super Cruise. But these medium speeds, where the vehicle is more likely to encounter traffic signals, intersections, and other complexities, are where Tesla has run into a lot of difficulty.

For now, FSD is only available to Tesla owners in the company’s early access beta-testing program, but Musk has said he expects a “wide release” before the end of 2020. The risk, obviously, is that Tesla’s customers will ignore the company’s warnings and misuse FSD to record themselves performing dangerous stunts — much like they have done for years and continue to do on a regular basis. This type of rule-breaking is to be expected, especially in a society where clout-chasing has become a way of life for many people.

Tesla has said Autopilot should only be used by attentive drivers with both hands on the wheel. But the feature is designed to assist a driver, and it’s not foolproof: there have been several high-profile incidents in which some drivers have engaged Autopilot, crashed, and died.

“Public road testing is a serious responsibility and using untrained consumers to validate beta-level software on public roads is dangerous and inconsistent with existing guidance and industry norms,” said Ed Niedermeyer, communications director for Partners for Automated Vehicle Education, a group that includes nonprofits and AV operators like Waymo, Argo, Cruise, and Zoox. “Moreover, it is extremely important to clarify the line between driver assistance and autonomy. Systems requiring human driver oversight are not self-driving and should not be called self-driving.”
