
Oculii looks to supercharge radar for autonomy with $55M round B

Autonomous vehicles rely on many sensors to perceive the world around them, and while cameras and lidar get a lot of the attention, good old radar is an important piece of the puzzle — though it has some fundamental limitations. Oculii, which just raised a $55 million round, aims to minimize those limitations and make radar more capable with a smart software layer for existing devices — and sell its own as well.


Lightmatter’s photonic AI ambitions light up an $80M B round

AI is fundamental to many products and services today, but its hunger for data and computing cycles is bottomless. Lightmatter plans to leapfrog Moore’s law with its ultra-fast photonic chips specialized for AI work, and with a new $80 million round, the company is poised to take its light-powered computing to market.

We first covered Lightmatter in 2018, when the founders were fresh out of MIT and had raised an $11 million A round to prove that their idea of photonic computing was as valuable as they claimed. They spent the next three years and change building and refining the tech — and running into all the hurdles that hardware startups and technical founders tend to find.

For a full breakdown of what the company’s tech does, read that feature — the essentials haven’t changed.

In a nutshell, Lightmatter’s chips perform in a flash — literally — certain complex calculations fundamental to machine learning. Instead of using charge, logic gates and transistors to record and manipulate data, the chips use photonic circuits that perform the calculations by manipulating the path of light. The approach has been possible in principle for years, but only recently has anyone gotten it to work at scale, and for a practical — indeed, highly valuable — purpose.

Prototype to product

It wasn’t entirely clear in 2018 when Lightmatter was getting off the ground whether this tech would be something they could sell to replace more traditional compute clusters like the thousands of custom units companies like Google and Amazon use to train their AIs.

“We knew in principle the tech should be great, but there were a lot of details we needed to figure out,” CEO and co-founder Nick Harris told TechCrunch in an interview. “Lots of hard theoretical computer science and chip design challenges we needed to overcome… and COVID was a beast.”

With suppliers out of commission and many in the industry pausing partnerships, delaying projects and other things, the pandemic put Lightmatter months behind schedule, but they came out the other side stronger. Harris said that the challenges of building a chip company from the ground up were substantial, if not unexpected.

Image Credits: Lightmatter

“In general what we’re doing is pretty crazy,” he admitted. “We’re building computers from nothing. We design the chip, the chip package, the card the chip package sits on, the system the cards go in, and the software that runs on it… We’ve had to build a company that straddles all this expertise.”

That company has grown from its handful of founders to more than 70 employees in Mountain View and Boston, and the growth will continue as it brings its new product to market.

Where a few years ago Lightmatter’s product was more of a well-informed twinkle in the eye, now it has taken a more solid form in the Envise, which they call a “general-purpose photonic AI accelerator.” It’s a server unit designed to fit into normal data center racks but equipped with multiple photonic computing units, which can perform neural network inference processes at mind-boggling speeds. (It’s limited to certain types of calculations, namely linear algebra for now, and not complex logic, but this type of math happens to be a major component of machine learning processes.)

Harris was reluctant to provide exact numbers on performance improvements — more because those numbers are still improving than because they’re not impressive. The website suggests it’s 5x faster than an Nvidia A100 unit on a large transformer model like BERT, while using about 15% of the energy. That makes the platform doubly attractive to deep-pocketed AI giants like Google and Amazon, which constantly require more computing power and pay through the nose for the energy required to use it. Either better performance or lower energy cost would be great — both together is irresistible.
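Those two figures compound. A back-of-the-envelope sketch (using the claimed marketing numbers above, which haven’t been independently verified) shows why the combination is so compelling:

```python
# Claimed figures: 5x the throughput of an Nvidia A100 at ~15% of the energy.
a100_throughput = 1.0  # normalized baseline
a100_energy = 1.0      # normalized baseline

envise_throughput = 5.0 * a100_throughput
envise_energy = 0.15 * a100_energy

# Performance per unit of energy, relative to the A100 baseline
perf_per_energy = (envise_throughput / envise_energy) / (a100_throughput / a100_energy)
print(f"~{perf_per_energy:.0f}x performance per unit of energy")  # ~33x
```

By that arithmetic, an operator gets roughly 33 times the work per joule — the kind of number that appeals to both the performance-hungry and the power-bill-conscious.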

Lightmatter’s initial plan is to test these units with its most likely customers by the end of 2021, refining them and bringing them up to production level so they can be sold widely. But Harris emphasized this was essentially the Model T of their new approach.

“If we’re right, we just invented the next transistor,” he said, and for the purposes of large-scale computing, the claim is not without merit. You’re not going to have a miniature photonic computer in your hand any time soon, but in data centers, where as much as 10% of the world’s power is predicted to go by 2030, “they really have unlimited appetite.”

The color of math

Image Credits: Lightmatter

There are two main ways by which Lightmatter plans to improve the capabilities of its photonic computers. The first, and most insane-sounding, is processing in different colors.

It’s not so wild when you think about how these computers actually work. Transistors, which have been at the heart of computing for decades, use electricity to perform logic operations, opening and closing gates and so on. At a macro scale you can have different frequencies of electricity that can be manipulated like waveforms, but at this smaller scale it doesn’t work like that. You just have one form of currency, electrons, and gates are either open or closed.

In Lightmatter’s devices, however, light passes through waveguides that perform the calculations as it goes, simplifying (in some ways) and speeding up the process. And light, as we all learned in science class, comes in a variety of wavelengths — all of which can be used independently and simultaneously on the same hardware.

The same optical magic that lets a signal sent from a blue laser be processed at the speed of light works for a red or a green laser with minimal modification. And if the light waves don’t interfere with one another, they can travel through the same optical components at the same time without losing any coherence.

Image Credits: Lightmatter

That means that if a Lightmatter chip can do, say, a million calculations a second using a red laser source, adding another color doubles that to two million, adding another makes three — with very little in the way of modification needed. The chief obstacle is getting lasers that are up to the task, Harris said. Being able to take roughly the same hardware and near-instantly double, triple or 20x the performance makes for a nice roadmap.
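The scaling Harris describes is linear: each wavelength is an independent lane through the same hardware. A toy sketch (the per-color rate below is the article’s illustrative figure, not a published spec):

```python
# Illustrative wavelength multiplexing: each added laser color travels the
# same waveguides independently, so throughput scales linearly with colors.
BASE_RATE = 1_000_000  # calcs/sec with a single laser color (illustrative)

def multiplexed_rate(n_wavelengths: int, per_color_rate: int = BASE_RATE) -> int:
    """Total throughput when n independent colors share the hardware."""
    return n_wavelengths * per_color_rate

for n in (1, 2, 3, 20):
    print(f"{n:2d} wavelength(s): {multiplexed_rate(n):,} calcs/sec")
```

The hardware stays roughly the same; only the laser sources change — hence the “nice roadmap.”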

It also leads to the second challenge the company is working on clearing away, namely interconnect. Any supercomputer is composed of many smaller individual computers, thousands and thousands of them, working in perfect synchrony. In order to do so, they need to communicate constantly to make sure each core knows what the other cores are doing, and otherwise coordinate the immensely complex computing problems supercomputing is designed to take on. (Intel talks about this “concurrency” problem of building an exascale supercomputer here.)

“One of the things we’ve learned along the way is, how do you get these chips to talk to each other when they get to the point where they’re so fast that they’re just sitting there waiting most of the time?” said Harris. The Lightmatter chips are doing work so quickly that they can’t rely on traditional computing cores to coordinate between them.

A photonic problem, it seems, requires a photonic solution: a wafer-scale interconnect board that uses waveguides instead of fiber optics to transfer data between the different cores. Fiber connections aren’t exactly slow, of course, but they aren’t infinitely fast, and the fibers themselves are actually fairly bulky at the scales at which chips are designed, limiting the number of channels you can have between cores.

“We built the optics, the waveguides, into the chip itself; we can fit 40 waveguides into the space of a single optical fiber,” said Harris. “That means you have way more lanes operating in parallel — it gets you to absurdly high interconnect speeds.” (Chip and server fiends can find the specs here.)
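Aggregate interconnect bandwidth scales with lane count in the same linear way. A hypothetical sketch (the 40-lane figure is Harris’s; the per-lane rate is a placeholder assumption, not a Lightmatter spec):

```python
# Waveguide interconnect: 40 on-chip lanes in the footprint of one fiber.
WAVEGUIDES_PER_FIBER_FOOTPRINT = 40  # figure quoted by Harris

def aggregate_bandwidth(fiber_footprints: int, per_lane_gbps: float = 25.0) -> float:
    """Total Gb/s across all waveguide lanes in the given footprint.

    per_lane_gbps is an assumed placeholder rate, not a published spec.
    """
    lanes = fiber_footprints * WAVEGUIDES_PER_FIBER_FOOTPRINT
    return lanes * per_lane_gbps

print(aggregate_bandwidth(1))  # 40 lanes at the assumed per-lane rate
```

The point isn’t the absolute number — it’s that lane count, rather than per-lane speed, is where the waveguide approach wins.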

The optical interconnect board is called Passage, and, like multicolor computation, it’s slated for a future generation of Envise products. 5-10x performance at a fraction of the power will have to satisfy potential customers for the present.

Putting that $80M to work

Those customers, initially the “hyper-scale” data handlers that already own data centers and supercomputers that they’re maxing out, will be getting the first test chips later this year. That’s where the B round is primarily going, Harris said: “We’re funding our early access program.”

That means both building hardware to ship (very expensive per unit before economies of scale kick in, not to mention the present difficulties with suppliers) and building the go-to-market team. Servicing, support and the immense amount of software that goes along with something like this — there’s a lot of hiring going on.

The round itself was led by Viking Global Investors, with participation from HP Enterprise, Lockheed Martin, SIP Global Partners, and previous investors GV, Matrix Partners and Spark Capital. It brings the company’s total raised to about $113 million: the initial $11 million A round, then GV hopping on with a $22 million A-1, and now this $80 million B.

Although there are other companies pursuing photonic computing and its potential applications, especially in neural networks, Harris didn’t seem to feel that they were nipping at Lightmatter’s heels. Few if any seem close to shipping a product, and at any rate this is a market that is in the middle of its hockey stick moment. He pointed to an OpenAI study indicating that the demand for AI-related computing is increasing far faster than existing technology can provide it, except by building ever larger data centers.

The next decade will bring economic and political pressure to rein in that power consumption, just as we’ve seen with the cryptocurrency world, and Lightmatter is poised and ready to provide an efficient, powerful alternative to the usual GPU-based fare.

As Harris suggested hopefully earlier, what his company has made is potentially transformative in the industry, and if so there’s no hurry — if there’s a gold rush, they’ve already staked their claim.

 


Cognixion’s brain-monitoring headset enables fluid communication for people with severe disabilities

Of the many frustrations of having a severe motor impairment, the difficulty of communicating must surely be among the worst. The tech world has not offered much succor to those affected by things like locked-in syndrome, ALS and severe strokes, but startup Cognixion aims to with a novel form of brain monitoring that, combined with a modern interface, could make speaking and interaction far simpler and faster.

The company’s Cognixion One headset tracks brain activity closely in such a way that the wearer can direct a cursor — reflected on a visor like a heads-up display — in multiple directions, or select from various menus and options. No physical movement is needed, and with the help of modern voice interfaces like Alexa, the user can not only communicate efficiently but freely access all kinds of information and content most people take for granted.

But it’s not a miracle machine, and it isn’t a silver bullet. Here’s how it got started.

Overhauling decades-old brain tech

Everyone with a motor impairment has different needs and capabilities, and there are a variety of assistive technologies that cater to many of these needs. But many of these techs and interfaces are years or decades old — medical equipment that hasn’t been updated for an era of smartphones and high-speed mobile connections.

Some of the most dated interfaces, unfortunately, are those used by people with the most serious limitations: those whose movements are limited to their heads, faces, eyes — or even a single eyelid, like Jean-Dominique Bauby, the famous author of “The Diving Bell and the Butterfly.”

One of the tools in the toolbox is the electroencephalogram, or EEG, which involves detecting activity in the brain via patches on the scalp that record electrical signals. But while they’re useful in medicine and research in many ways, EEGs are noisy and imprecise — more for finding which areas of the brain are active than, say, which sub-region of the sensory cortex or the like. And of course you have to wear a shower cap wired with electrodes (often greasy with conductive gel) — it’s not the kind of thing anyone wants to do for more than an hour, let alone all day every day.

Yet even among those with the most profound physical disabilities, cognition is often unimpaired — as indeed EEG studies have helped demonstrate. It made Andreas Forsland, co-founder and CEO of Cognixion, curious about further possibilities for the venerable technology: “Could a brain-computer interface using EEG be a viable communication system?”

He first used EEG for assistive purposes in a research study some five years ago, when he and his colleagues were looking into alternative methods of letting a person control an on-screen cursor — among them an accelerometer for detecting head movements — and tried integrating EEG readings as another signal. But it was far from a breakthrough.

A modern lab with an EEG cap wired to a receiver and laptop — this is an example of how EEG is commonly used. Image Credits: BSIP/Universal Images Group via Getty Images

He ran down the difficulties: “With a read-only system, the way EEG is used today is no good; other headsets have slow sample rates and they’re not accurate enough for a real-time interface. The best BCIs are in a lab, connected to wet electrodes — it’s messy, it’s really a non-starter. So how do we replicate that with dry, passive electrodes? We’re trying to solve some very hard engineering problems here.”

The limitations, Forsland and his colleagues found, were not so much with the EEG itself as with the way it was carried out. This type of brain monitoring is meant for diagnosis and study, not real-time feedback. It would be like taking a tractor to a drag race. Not only do EEGs often work with a slow, thorough check of multiple regions of the brain that may last several seconds, but the signals they produce are analyzed by dated statistical methods. So Cognixion started by questioning both practices.

Improving the speed of the scan is more complicated than overclocking the sensors or something. Activity in the brain must be inferred by collecting a certain amount of data. But that data is collected passively, so Forsland tried bringing an active element into it: a rhythmic electric stimulation that is in a way reflected by the brain region, but changed slightly depending on its state — almost like echolocation.
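That stimulate-and-detect idea can be illustrated with a classic frequency-tagging scheme: flicker each selectable target at its own rate, then check which reference frequency the recorded response best matches. This is a toy sketch of the general concept, not Cognixion’s actual pipeline:

```python
import math

SAMPLE_RATE = 250  # Hz, an illustrative sampling rate

def score(signal, freq_hz):
    """Correlate the signal against sine/cosine references at freq_hz."""
    s = sum(x * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i, x in enumerate(signal))
    return math.hypot(s, c) / len(signal)

def detect(signal, candidate_freqs):
    """Return the stimulation frequency the response matches best."""
    return max(candidate_freqs, key=lambda f: score(signal, f))

# Simulated one-second response locked to a 12 Hz stimulus:
resp = [math.sin(2 * math.pi * 12 * i / SAMPLE_RATE) for i in range(SAMPLE_RATE)]
print(detect(resp, [8.0, 10.0, 12.0, 15.0]))  # 12.0
```

Each target gets its own flicker rate, so identifying the dominant frequency in the visual cortex response identifies what the user is attending to — the hard part, as described below, is doing that quickly and accurately with dry electrodes.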

The Cognixion One headset with its dry EEG terminals visible. Image Credits: Cognixion

They detect these signals with a custom set of six EEG channels in the visual cortex area (up and around the back of your head), and use a machine learning model to interpret the incoming data. Running a convolutional neural network locally on an iPhone — something that wasn’t really possible a couple years ago — the system can not only tease out a signal in short order but make accurate predictions, making for faster and smoother interactions.

The result is sub-second latency with 95-100% accuracy in a wireless headset powered by a mobile phone. “The speed, accuracy and reliability are getting to commercial levels — we can match the best in class of the current paradigm of EEGs,” said Forsland.

Dr. William Goldie, a clinical neurologist who has used and studied EEGs and other brain monitoring techniques for decades (and who has been voluntarily helping Cognixion develop and test the headset), offered a positive evaluation of the technology.

“There’s absolutely evidence that brainwave activity responds to thinking patterns in predictable ways,” he noted. This type of stimulation and response was studied years ago. “It was fascinating, but back then it was sort of in the mystery magic world. Now it’s resurfacing with these special techniques and the computerization we have these days. To me it’s an area that’s opening up in a manner that I think clinically could be dramatically effective.”

BCI, meet UI

The first thing Forsland told me was “We’re a UI company.” And indeed even such a step forward in neural interfaces as he later described means little if it can’t be applied to the problem at hand: helping people with severe motor impairment to express themselves quickly and easily.

Sad to say, it’s not hard to imagine improving on the “competition,” things like puff-and-blow tubes and switches that let users laboriously move a cursor right, right a little more, up, up a little more, then click: a letter! Gaze detection is of course a big improvement over this, but it’s not always an option (eyes don’t always work as well as one would like) and the best eye-tracking solutions (like a Tobii Dynavox tablet) aren’t portable.

Why shouldn’t these interfaces be as modern and fluid as any other? The team set about making a UI with this and the capabilities of their next-generation EEG in mind.

Image Credits: Cognixion

Their solution takes bits from the old paradigm and combines them with modern virtual assistants and a radial design that prioritizes quick responses and common needs. It all runs in an app on an iPhone, the display of which is reflected in a visor, acting as a HUD and outward-facing display.

Within easy reach — requiring not a single thought, perhaps, but at least a moment’s concentration or a tilt of the head — are everyday questions and responses: yes, no, thank you, etc. Then there are slots to put prepared speech into — names, menu orders and so on. And then there’s a keyboard with word- and sentence-level prediction that allows common words to be popped in without spelling them out.

“We’ve tested the system with people who rely on switches, who might take 30 minutes to make 2 selections. We put the headset on a person with cerebral palsy, and she typed out her name and hit play in 2 minutes,” Forsland said. “It was ridiculous, everyone was crying.”

Goldie noted that there’s something of a learning curve. “When I put it on, I found that it would recognize patterns and follow through on them, but it also sort of taught patterns to me. You’re training the system, and it’s training you — it’s a feedback loop.”

“I can be the loudest person in the room”

One person who has found it extremely useful is Chris Benedict, a DJ, public speaker and disability advocate who himself has dyskinetic cerebral palsy. It limits his movements and ability to speak, but it doesn’t stop him from spinning (digital) records at various engagements, or from explaining his experience with the headset over email. (And you can see him demonstrating it in person in the video above.)

Image Credits: Cognixion

“Even though it’s not a tool that I’d need all the time it’s definitely helpful in aiding my communication,” he told me. “Especially when I need to respond quickly or am somewhere that is noisy, which happens often when you are a DJ. If I wear it with a Bluetooth speaker I can be the loudest person in the room.” (He always has a speaker on hand, since “you never know when you might need some music.”)

The benefits offered by the headset give some idea of what is lacking from existing assistive technology (and what many people take for granted).

“I can use it to communicate, but at the same time I can make eye contact with the person I’m talking to, because of the visor. I don’t have to stare at a screen between me and someone else. This really helps me connect with people,” Benedict explained.

“Because it’s a headset I don’t have to worry about getting in and out of places, there is no extra bulk added to my chair that I have to worry about getting damaged in a doorway. The headset is balanced too, so it doesn’t make my head lean back or forward or weigh my neck down,” he continued. “When I set it up to use the first time it had me calibrate, and it measured my personal range of motion so the keyboard and choices fit on the screen specifically for me. It can also be recalibrated at any time, which is important because not every day is my range of motion the same.”

Alexa, which has been extremely helpful to people with a variety of disabilities due to its low cost and wide range of compatible devices, is also part of the Cognixion interface, something Benedict appreciates, having himself adopted the system for smart home and other purposes. “With other systems this isn’t something you can do, or if it is an option, it’s really complicated,” he said.

Next steps

As Benedict demonstrates, there are people for whom a device like Cognixion’s makes a lot of sense, and the hope is it will be embraced as part of the necessarily diverse ecosystem of assistive technology.

Forsland said that the company is working closely with the community, from users to clinical advisors like Goldie and other specialists, like speech therapists, to make the One headset as good as it can be. But the hurdle, as with so many devices in this class, is how to actually put it on people’s heads — financially and logistically speaking.

Cognixion is applying for FDA clearance to get the cost of the headset — which, being powered by a phone, is not as high as it would be with an integrated screen and processor — covered by insurance. But in the meantime the company is working with clinical and corporate labs that are doing neurological and psychological research. Places where you might find an ordinary, cumbersome EEG setup, in other words.

The company has raised funding and is looking for more (hardware development and medical pursuits don’t come cheap), and has also collected a number of grants.

The Cognixion One headset may still be some years away from wider use (the FDA is never in a hurry), but that allows the company time to refine the device and include new advances. Unlike many other assistive devices, for example a switch or joystick, this one is largely software-limited, meaning better algorithms and UI work will significantly improve it. While many wait for companies like Neuralink to create a brain-computer interface for the modern era, Cognixion has already done so for a group of people who have much more to gain from it.

You can learn more about the Cognixion One headset and sign up to receive the latest at its site here.

Sony announces investment and partnership with Discord to bring the chat app to PlayStation

Sony and Discord have announced a partnership that will integrate the latter’s popular gaming-focused chat app with PlayStation’s own built-in social tools. It’s a big move and a fairly surprising one given how recently acquisition talks were in the air — Sony appears to have offered a better deal than Microsoft, taking an undisclosed minority stake in the company ahead of a rumored IPO.

The exact nature of the partnership is not expressed in the brief announcement post. The closest we come to hearing what will actually happen is that the two companies plan to “bring the Discord and PlayStation experiences closer together on console and mobile starting early next year,” which at least is easy enough to imagine.

Discord has partnered with console platforms before, though its deal with Microsoft was not a particularly deep integration. This is almost certainly less a “friends can see what you’re playing on PS5” feature and more an alternative chat infrastructure for anyone on a Sony system. Chances are it’ll be a deep, system-wide but clearly Discord-branded option — such as a “Start a voice chat with Discord” button when you invite a friend to your game or join theirs.

The timeline of early 2022 also suggests that this is a major product change, probably coinciding with a big platform update on Sony’s long-term PS5 roadmap.

While the new PlayStation is better than the old one when it comes to voice chat, the old one wasn’t great to begin with, and Discord is not just easier to use but something millions of gamers already do use daily. And these days, if a game isn’t an exclusive, being robustly cross-platform is the next best option — so PS5 players being able to seamlessly join and chat with PC players will reduce a pain point there.

Of course Microsoft has its own advantages, running both the Xbox and Windows ecosystems, but it has repeatedly fumbled this opportunity and the acquisition of Discord might have been the missing piece that tied it all together. That bird has flown, of course, and while Microsoft’s acquisition talks reportedly valued Discord at some $10 billion, it seems the growing chat app decided it would rather fly free with an IPO and attempt to become the dominant voice platform everywhere rather than become a prized pet.

Sony has done its part, financially speaking, by taking part in Discord’s recent $100 million H round. The amount they contributed is unknown, but perforce it can’t be more than a small minority stake, given how much the company has taken on and its total valuation.

Facebook is buying the developer behind VR shooter ‘Onward’

After a steady stream of studio acquisitions in late 2019 and early 2020, Facebook has been a little quieter in recent months when it comes to bulking up its VR content arm.

Today, the social media giant breaks that silence, announcing its acquisition of Downpour Interactive, the developer of the popular VR first-person shooter Onward. The title, which is available on the company’s Rift and Quest platforms, as well as through Valve’s Steam store, has been among virtual reality’s top sellers in recent years.

Facebook says that the title will continue to be available on non-Facebook VR hardware going forward.

It’s an interesting deal, particularly after the company’s recent attempt to create an ambitious first-person shooter of its own, partnering with Apex Legends developer Respawn Entertainment and dumping millions into a Medal of Honor VR title that was tepidly received by reviewers after its release this past December.

Facebook didn’t share terms of the Downpour deal, though they noted that the entire team will be joining Oculus Studios. In a blog post detailing the deal, Mike Verdu, Facebook’s VP of AR/VR Content, called Onward a “multiplayer masterpiece.”

Apple sales bounce back in China as Huawei loses smartphone crown

Huawei’s smartphone rivals in China are quickly divvying up the market share it has lost over the past year.

Indeed, 92.4 million smartphones were shipped in China during the first quarter, with Vivo claiming the crown with a 23% share and its sister company Oppo following close behind with 22%, according to market research firm Canalys. Huawei, whose smartphone sales took a hit after U.S. sanctions cut key chip parts off its supply chain, came in third at 16%. Xiaomi and Apple took the fourth and fifth spots, respectively.

All major smartphone brands but Huawei saw a jump in their market share in China from Q1 2020. Apple’s net sales in Greater China nearly doubled year-over-year to $17.7 billion in the three months ended March, contributing to a record March quarter for the American giant, according to its latest financial results.

“We’ve been especially pleased by the customer response in China to the iPhone 12 family,” said Tim Cook during an earnings call this week. “You have to remember that China entered the shutdown phase earlier in Q2 of last year than other countries. And so they were relatively more affected in that quarter, and that has to be taken into account as you look at the results.”

Huawei’s share shrank from a dominant 41% to 16% in a year’s time, though the telecom equipment giant managed to increase its profit margin, partly thanks to slashed costs. In November, it sold off its budget phone line Honor.

This quarter is also the first time China’s smartphone market has grown in four years, with a growth rate of 27%, according to Canalys.

“Leading vendors are racing to the top of the market, and there was an unusually high number of smartphone launches this quarter compared with Q1 2020 or even Q4 2020,” said Canalys analyst Amber Liu.

“Huawei’s sanctions and Honor’s divestiture have been hallmarks of this new market growth, as consumers and channels become more open to alternative brands.”

 

UK’s IoT ‘security by design’ law will cover smartphones too

Smartphones will be included in the scope of a planned “security by design” U.K. law aimed at beefing up the security of consumer devices, the government said today.

It made the announcement in its response to a consultation on legislative plans aimed at tackling some of the most lax security practices long-associated with the Internet of Things (IoT).

The government introduced a security code of practice for IoT device manufacturers back in 2018 — but the forthcoming legislation is intended to build on that with a set of legally binding requirements.

A draft law was aired by ministers in 2019 — with the government focused on IoT devices, such as webcams and baby monitors, which have often been associated with the most egregious device security practices.

Its plan now is for virtually all smart devices to be covered by legally binding security requirements, with the government pointing to research from consumer group “Which?” that found that a third of people kept their last phone for four years, while some brands only offer security updates for just over two years.

The forthcoming legislation will require smartphone and device makers like Apple and Samsung to inform customers of the duration of time for which a device will receive software updates at the point of sale.

It will also ban manufacturers from using universal default passwords (such as “password” or “admin”), which are often preset in a device’s factory settings and easily guessable — making them meaningless in security terms.

California already passed legislation banning such passwords in 2018 with the law coming into force last year.

Under the incoming U.K. law, manufacturers will additionally be required to provide a public point of contact to make it simpler for anyone to report a vulnerability.

The government said it will introduce legislation as soon as parliamentary time allows.

Commenting in a statement, digital infrastructure minister Matt Warman added: “Our phones and smart devices can be a gold mine for hackers looking to steal data, yet a great number still run older software with holes in their security systems.

“We are changing the law to ensure shoppers know how long products are supported with vital security updates before they buy and are making devices harder to break into by banning easily guessable default passwords.

“The reforms, backed by tech associations around the world, will torpedo the efforts of online criminals and boost our mission to build back safer from the pandemic.”

A DCMS spokesman confirmed that laptops, PCs and tablets with no cellular connection will not be covered by the law, nor will secondhand products — though he added that the intention is for the scope to be adaptive, so the law can keep pace with new threats that may emerge around devices.


Deep Science: Introspective, detail-oriented and disaster-chasing AIs

Research papers come out far too frequently for anyone to read them all. That’s especially true in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect some of the most relevant recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.


Apple brings Touch ID to the Magic Keyboard

Apple has unveiled a new, colorful iMac today with an Apple-designed M1 chip. But that was just part of the story as the company used that opportunity to release new Mac accessories. In addition to a Magic Trackpad and a Magic Mouse with multiple color options, Apple is bringing Touch ID to desktop Macs with a new Magic Keyboard.

Touch ID on desktop works as expected. There’s a fingerprint sensor located at the top right of the keyboard. It replaces the ‘Eject’ key that you can find on existing Apple keyboards. It lets you unlock your computer, pay with Apple Pay, unlock a password manager and more.

Interestingly, Touch ID works wirelessly, which means you don’t have to connect the keyboard to your Mac with a Lightning cable. There’s a dedicated security component built into the keyboard. It communicates directly with the Secure Enclave in the M1, which means it only works with modern Macs that have an M1 chip. It’s going to be interesting to see the security implementation of this new take on Touch ID.

Customers can choose among three keyboard models when they buy a new iMac. Some iMac models probably won’t come with Touch ID by default. You may be able to buy the keyboard separately, but we’ll have to wait for the event to end to find out how much the new keyboard costs.

Image Credits: Apple