Amazon makes Alexa Routines shareable

Amazon is making it easier for Alexa device owners to use Routines. The feature, which has been around for years, allows Alexa users to combine multiple tasks into a single voice command of their choosing. For example, you could make a routine that turns off your lights, plays relaxing music and locks your doors when you say, “Alexa, goodnight.” A morning routine could read you the headlines and weather forecast, as well as turn on your connected coffee maker. Now, Amazon will allow users to share their favorite routines with others.


Boston Dynamics delivers plan for logistics robots as early as next year

Boston Dynamics is just months away from announcing its approach to logistics, the first real vertical it aims to enter after proving its ability to build robots at scale with the quadrupedal Spot. The company’s new CEO, Robert Playter, sees the company coming into its own after decades of experimentation.

Playter, interviewed on the virtual main stage of Disrupt 2020, only recently ascended from COO to the top role after many years at the company, when longtime CEO and founder Marc Raibert stepped aside to focus on R&D. This is Playter’s first public speaking engagement since taking on the new responsibility, and it’s clear he has big plans for Boston Dynamics.

The recent commercialization of Spot, the versatile quadrupedal robot that is a distant descendant of the famous Big Dog, showed Playter and the company that there is a huge demand for what they’re offering, even if they’re not completely sure where that demand is.

“We weren’t sure exactly what the target verticals would be,” he admitted, and seemingly neither did the customers, who have collectively bought about 260 of the $75,000 robots and are now actively building their own add-ons and industry-specific tools for the platform. And the price hasn’t been a deterrent, he said: “As an industrial tool this is actually quite affordable. But we’ve been very aggressive, spending a lot of money to try to build an affordable way to produce this, and we’re already working on ways to continue to reduce costs.”

Image Credits: TechCrunch

The global pandemic has also helped create a sense of urgency around robots as an alternative to or augmentation of manual labor.

“People are realizing that having a physical proxy for themselves, to be able to be present remotely, might be more important than we imagined before,” Playter said. “We’ve always thought of robots as being able to go into dangerous places, but now danger has been redefined a little bit because of COVID. The pandemic is accelerating the sense of urgency and, I think, probably opening up the kinds of applications that we will explore with this technology.”

Among the COVID-specific applications, the company has fielded requests for collaboration on remote monitoring of patients, and automatic disinfection using Spot to carry aerosol spray through a facility. “I don’t know whether that’ll be a big market going forward, but we thought it was important to respond at the time,” he said. “Partly out of a sense of obligation to the community and society that we do the right thing here.”

The “Dr Spot” remote vitals measurement program at MIT.

One of the earliest applications to scale successfully was, of course, logistics, where companies like Amazon have embraced robotics as a way to increase productivity and lower labor costs. Boston Dynamics is poised to jump into the market with a very different robot — or rather robots — meant to help move boxes and other box-like items around in a very different way from the currently practical “autonomous pallet” method.

“We have big plans in logistics,” Playter said. “We’re going to have some exciting new logistics products coming out in the next two years. We have customers now doing proof-of-concept tests. We’ll announce something in 2021, exactly what we’re doing, and we’ll have product available in 2022.”

The company already offers Pick, a more traditional, stationary item-picking system, and they’re working on the next version of Handle, a birdlike mobile robot that can grab boxes and move them around while taking up comparatively little space — no more than a person or two standing up. This mobility allows it to unload things like shipping containers, trucks and other confined or less predictable spaces.


In a video shown during the interview (which you can watch above), Handle is also shown working in concert with an off-the-shelf pallet robot, and Playter emphasized the need for this kind of cooperation, and not just between robots from a single creator.

“We’ll be offering software that lets robots work together,” he said. “Now, we don’t have to create them all. But ultimately it will take teams of robots to do some of these tasks, and we anticipate being able to work with a heterogeneous fleet.”

This kinder, gentler, more industry-friendly Boston Dynamics is almost certainly a product of nudging from SoftBank, which acquired the company in 2018, but also the simple reality that you can’t run a world-leading robotics R&D outfit for nothing. But Playter was keen to note that the Japanese tech giant understands that “we’re only in the position we’re in now because of the previous work we’ve done in the last two decades, developing these advanced capabilities, so we have to keep doing that.”

One thing you won’t likely see doing real work any time soon is Atlas, the company’s astonishingly agile humanoid robot. It’s just not practical for anything just yet, but instead acts as a kind of prestige project, forcing the company to constantly adjust its sights upward.

“It’s such a complex robot, and it can do so much it forces us to create tools we would not otherwise. And people love it — it’s aspirational, it attracts talent,” said Playter.

And he himself is no exception. Once a gymnast, he recalled “a nostalgic moment” watching Atlas vault around. “A lot of the people in the company, including Marc, have inspiration from the athletic performance of people and animals,” Playter said. “That DNA is deeply embedded in our company.”

iOS 14 is now available to download

Apple has just released the final version of iOS 14, the next major version of the operating system for the iPhone. It is a free download that works with the iPhone 6s or later, both generations of iPhone SE and the most recent iPod touch model. If your device runs iOS 13, it supports iOS 14. The update may not be immediately available on your device, but keep checking, as people are already receiving it.

JAWS architect Glen Gordon is joining Sight Tech Global, a virtual event Dec. 2-3

For people who are blind or visually impaired, JAWS is synonymous with the freedom to operate Windows PCs with a remarkable degree of control and precision, with output in speech and Braille. The keyboard-driven application makes it possible to navigate the GUI-based interfaces of websites and Windows programs. Anyone who has ever listened to someone proficient in JAWS (an acronym for “Job Access With Speech”) navigate a PC can’t help but marvel at the speed of the operator and the rapid-fire machine-voice responses from JAWS itself.

Microsoft’s Project Natick underwater data center experiment confirms viability of seafloor data storage

Microsoft has concluded a years-long experiment involving the use of a shipping container-sized underwater data center placed on the seafloor off the coast of Scotland’s Orkney Islands. The company pulled its “Project Natick” underwater data warehouse out of the water at the beginning of the summer and has spent the months since studying the data center, and the air it contained, to determine the model’s viability.


N95 masks could soon be rechargeable instead of disposable

The pandemic has led to N95 masks quickly becoming one of the world’s most sought-after resources as essential workers burned through billions of them. New research could lead to an N95 that you can recharge rather than throw away — or even one that continuously tops itself up for maximum effectiveness.

AT&T customers can now make and receive calls via Alexa

Amazon this morning announced it’s teaming up with AT&T on a new feature that will allow some AT&T customers to make and receive phone calls through their Alexa-enabled devices, like an Amazon Echo smart speaker. Once enabled, customers with supported devices will be able to speak to the Alexa digital assistant to start a phone call or answer an incoming call, even if their phone is out of reach, turned off or out of battery.

The feature, “AT&T calling with Alexa,” has to first be set up under the user’s Alexa account.

To enable the option, go to the “Communication” section in the Alexa app’s Settings. From there, select “AT&T” and follow the on-screen instructions to link your mobile number.

Once linked, AT&T customers will be able to say things like “Alexa, call Jessica,” or “Alexa, dial XXX-XXX-XXXX” (where the Xes represent someone’s phone number).

When a call is coming in, Alexa will announce the call by saying, “Incoming call from James,” or whomever is ringing you. You can respond, “Alexa, answer,” to pick up, then speak to the caller via your Alexa device.

There are a few different ways to control when you want to receive incoming calls.

You can create an Alexa Routine that specifies you’ll only receive your calls through Alexa during workday hours of 9 a.m. to 5 p.m., for example. You could also make a routine that allowed you to disable AT&T calls on your device when you said a trigger phrase, like “Alexa, I’m leaving home.” Plus, you can manually turn off the feature when you’re leaving the house by switching on the “Away Mode” setting in the Alexa app.

The new feature is made possible by AT&T’s NumberSync service that allows users to make and receive phone calls on smartwatches, tablets, computers and, now, Alexa devices. There’s no cost associated with using the feature, which is included with all eligible AT&T mobile plans.

Amazon says AT&T Calling with Alexa is available on post-paid plans for those customers who have a compatible HD-voice mobile phone, like an iPhone or Samsung Galaxy device, among many others.

While only AT&T customers in the U.S. can take advantage of the feature, they’re able to place outgoing calls to numbers across Mexico, Canada and the U.K., as well as the U.S.

Amazon declined to say if it plans to offer a similar feature to customers with other carriers, but says it will respond to user feedback to evolve the feature over time.

This is not the first feature designed to make Alexa devices a tool for communication.

Amazon has already tried to make its Alexa devices work like a cross between a home intercom and a phone. With features like Drop-In, users can check in on family members in other parts of the home. Or they could use Announcements to broadcast messages, like “Dinner’s ready!” Meanwhile, calling features like Alexa-to-Alexa Calling or Alexa Outbound Calling have allowed users to make free phone calls to both other Alexa users and most mobile and landline numbers in the U.S., U.K., Canada and Mexico through Alexa devices or the Alexa app.

However, these features didn’t support incoming calls or calls to emergency services, like 911, so they weren’t full phone replacements.

Arguably, it may also be hard to get users to change their habit of using their cell phone in favor of an Alexa device, given that many people tend to keep phones nearby at all times, even when at home.

By offering a way to tie an Alexa device to a real phone number, however, users may be more inclined to try calling through Alexa.

The feature could also benefit the elderly, who may not be able to get to their phone in time in the event of an emergency, or those with other special needs or disabilities that make walking over to a cell phone to answer a call more difficult.

Unfortunately, there’s still a major roadblock to using this service: spam calls. So many calls today are unwanted robocalls and spam. Having them announced over Alexa could become more of an annoyance than a help, unless users already subscribe to an advanced call blocker service.

Amazon says the new feature is live today across the U.S.


Voice assistants don’t work for kids: The problem with speech recognition in the classroom

Before the pandemic, more than 40% of new internet users were children. Estimates now suggest that children’s screen time has surged by 60% or more, with children 12 and under spending upward of five hours per day on screens (with all of the associated benefits and perils).

Although it’s easy to marvel at the technological prowess of digital natives, educators (and parents) are painfully aware that young “remote learners” often struggle to navigate the keyboards, menus and interfaces required to make good on the promise of education technology.

Against that backdrop, voice-enabled digital assistants hold out hope of a more frictionless interaction with technology. But while kids are fond of asking Alexa or Siri to beatbox, tell jokes or make animal sounds, parents and teachers know that these systems have trouble comprehending their youngest users once they deviate from predictable requests.

The challenge stems from the fact that the speech recognition software powering popular voice assistants like Alexa, Siri and Google Assistant was never designed for use with children, whose voices, language and behavior are far more complex than those of adults.

It is not just that kids’ voices are squeakier: their vocal tracts are thinner and shorter, their vocal folds smaller and their larynxes not yet fully developed. The result is speech patterns very different from those of an older child or an adult.

From the graphic below, it is easy to see that simply changing the pitch of adult voices used to train speech recognition fails to reproduce the complexity of information required to comprehend a child’s speech. Children’s language structures and patterns vary greatly. They make leaps in syntax, pronunciation and grammar that need to be taken into account by the natural language processing component of speech recognition systems. That complexity is compounded by inter-speaker variability among children at a wide range of developmental stages, which need not be accounted for with adult speech.

Changing the pitch of adult voices used to train speech recognition fails to reproduce the complexity of information required to comprehend a child’s speech. Image Credits: SoapBox Labs
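The graphic’s point can be illustrated with a toy signal. The sketch below is purely illustrative, not real speech data or any vendor’s actual method; the signal, frequencies and helper functions are all hypothetical. Naive resampling-based pitch shifting scales the fundamental and every formant by the same factor, whereas in real children’s voices pitch and formant frequencies shift by different amounts, because vocal folds and vocal tract length develop separately.

```python
import numpy as np

SR = 16000  # sample rate in Hz

def synth_voice(f0, formant, sr=SR, dur=1.0):
    """Toy 'vowel': a fundamental plus one formant-like partial."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * formant * t)

def pitch_shift_by_resampling(y, factor):
    """Naive pitch shift: read the signal `factor` times faster."""
    idx = (np.arange(int(len(y) / factor)) * factor).astype(int)
    return y[idx]

def spectral_peaks(y, sr=SR, n=2):
    """Frequencies of the n strongest FFT magnitude peaks, ascending."""
    spec = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), 1 / sr)
    return sorted(freqs[np.argsort(spec)[-n:]])

adult = synth_voice(f0=120, formant=500)         # adult-like: 120 Hz pitch, 500 Hz formant
shifted = pitch_shift_by_resampling(adult, 1.5)  # naive "child voice" augmentation

print(spectral_peaks(adult))    # → [120.0, 500.0]
print(spectral_peaks(shifted))  # ≈ [180, 750]: pitch AND formant both scaled by 1.5
# A real child's voice does not work that way: pitch and formants shift
# independently, so the 1.5x-everything signal matches neither.
```

The same coupling holds for any speed-based pitch shift: it compresses the entire spectrum uniformly, so a model trained on such augmented audio still never sees the pitch-to-formant relationships characteristic of children’s speech.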

A child’s speech behavior is not just more variable than an adult’s; it is wildly erratic. Children over-enunciate words, elongate certain syllables, punctuate each word as they think aloud or skip some words entirely. Their speech patterns are not beholden to the common cadences familiar to systems built for adult users. As adults, we have learned how best to interact with these devices and how to elicit the best response. We straighten ourselves up, formulate the request in our heads, modify it based on learned behavior, inhale a deep breath and speak our request out loud: “Alexa … ” Kids simply blurt out their requests as if Siri or Alexa were human, and more often than not get an erroneous or canned response.

In an educational setting, these challenges are exacerbated by the fact that speech recognition must grapple with not just ambient noise and the unpredictability of the classroom, but changes in a child’s speech throughout the year, and the multiplicity of accents and dialects in a typical elementary school. Physical, language and behavioral differences between kids and adults also increase dramatically the younger the child. That means that young learners, who stand to benefit most from speech recognition, are the most difficult for developers to build for.

To account for and understand the highly varied quirks of children’s language requires speech recognition systems built intentionally to learn from the ways kids speak. Children’s speech cannot be treated as just another accent or dialect for speech recognition to accommodate; it is fundamentally and practically different, and it changes as children grow and develop both physically and in language skills.

Unlike in most consumer contexts, accuracy has profound implications for children. A system that tells a kid they are wrong when they are right (a false negative) damages their confidence; one that tells them they are right when they are wrong (a false positive) risks socioemotional (and psychometric) harm. In an entertainment setting, in apps, gaming, robotics and smart toys, these false negatives or positives lead to frustrating experiences. In schools, errors, misunderstandings or canned responses can have far more profound educational — and equity — implications.

Well-documented bias in speech recognition can, for example, have pernicious effects with children. It is not acceptable for a product to work with poorer accuracy — delivering false positives and negatives — for kids of a certain demographic or socioeconomic background. A growing body of research suggests that voice can be an extremely valuable interface for kids but we cannot allow or ignore the potential for it to magnify already endemic biases and inequities in our schools.

Speech recognition has the potential to be a powerful tool for kids at home and in the classroom. It can fill critical gaps in supporting children through the stages of literacy and language learning, helping kids better understand — and be understood by — the world around them. It can pave the way for a new era of “invisible” observational measures that work reliably, even in a remote setting. But most of today’s speech recognition tools are ill-suited to this goal. The technologies found in Siri, Alexa and other voice assistants have a job to do — to understand adults who speak clearly and predictably — and, for the most part, they do that job well. If speech recognition is to work for kids, it has to be modeled for, and respond to, their unique voices, language and behaviors.