
Alba Orbital’s mission to image the Earth every 15 minutes brings in $3.4M seed round

Orbital imagery is in demand, and if you think having daily images of everywhere on Earth is going to be enough in a few years, you need a lesson in ambition. Alba Orbital is here to provide one, with its plan to offer Earth observation at 15-minute intervals rather than hours or days — and it just raised $3.4 million to get its next set of satellites into orbit.

Alba attracted our attention at Y Combinator’s latest demo day; I was impressed with the startup’s accomplishment of already having six satellites in orbit, which is more than most companies with space ambitions ever manage. But it’s only the start for the company, which will need hundreds more to begin to offer its planned high-frequency imagery.

The Scottish company has spent the last few years in prep and R&D, pursuing the goal, which some must have thought laughable, of creating a solar-powered Earth observation satellite that weighs in at less than one kilogram. The joke’s on the skeptics, however — Alba has launched a proof of concept and is ready to send the real thing up as well.

Little more than a flying camera with a minimum of storage, communication, power and movement, the sub-kilogram Unicorn-2 is about the size of a soda can, with paperback-size solar panel wings, and costs in the neighborhood of $10,000. It should be able to capture imagery at up to 10-meter resolution, good enough to see things like buildings, ships, crops, even planes.

Image Credits: Alba Orbital

“People thought we were idiots. Now they’re taking it seriously,” said Tom Walkinshaw, founder and CEO of Alba. “They can see it for what it is: a unique platform for capturing data sets.”

Indeed, although the idea of daily orbital imagery like Planet’s once seemed excessive, in some situations it’s quite clearly not enough.

“The California case is probably wildfires,” said Walkinshaw (and it always helps to have a California case). “Having an image once a day of a wildfire is a bit like having a chocolate teapot… not very useful. And natural disasters like hurricanes, flooding is a big one, transportation as well.”

Walkinshaw noted that the company was bootstrapped and profitable before taking on the task of launching dozens more satellites, something the seed round will enable.

“It gets these birds in the air, gets them finished and shipped out,” he said. “Then we just need to crank up the production rate.”

Image Credits: Alba Orbital

When I talked to Walkinshaw via video call, 10 or so completed satellites in their launch shells were sitting on a rack behind him in the clean room, and more are in the process of assembly. Aiding in the scaling effort is new investor James Park, founder and CEO of Fitbit — definitely someone who knows a little bit about bringing hardware to market.

Interestingly, the next batch to go to orbit (perhaps as soon as a month or two from now, depending on the machinations of the launch provider) will focus on nighttime imagery, an area Walkinshaw suggested was undervalued. But as orbital thermal imaging startup Satellite Vu has shown, there’s immense appetite for things like energy and activity monitoring, and nighttime observation is a big part of that.

The seed round will get the next few rounds of satellites into space, and after that Alba will be working on scaling manufacturing to produce hundreds more. Once those start going up it can demonstrate the high-cadence imaging it is aiming to produce — for now it’s impossible to do so, though Alba already has customers lined up to buy the imagery it does get.

The round was led by Metaplanet Holdings, with participation by Y Combinator, Liquid2, Soma, Uncommon Denominator, Zillionize and numerous angels.

As for competition, Walkinshaw welcomes it, but feels secure that he and his company have more time and work invested in this class of satellite than anyone in the world — a major obstacle for anyone who wants to do battle. It’s more likely companies will, as Alba has done, pursue a distinct product complementary to those already on offer or in the works.

“Space is a good place to be right now,” he concluded.

CMU researchers show potential of privacy-preserving activity tracking using radar

Imagine if you could settle/rekindle domestic arguments by asking your smart speaker when the room last got cleaned or whether the bins already got taken out?

Or — for an altogether healthier use-case — what if you could ask your speaker to keep count of reps as you do squats and bench presses? Or switch into full-on ‘personal trainer’ mode — barking orders to pedal faster as you spin on a dusty old exercise bike (who needs a Peloton!).

And what if the speaker was smart enough to just know you’re eating dinner and took care of slipping on a little mood music?

Now imagine if all those activity tracking smarts were on tap without any connected cameras installed inside your home.

Another bit of fascinating work from researchers at Carnegie Mellon University’s Future Interfaces Group opens up these sorts of possibilities — demonstrating a novel approach to activity tracking that does not rely on cameras as the sensing tool.

Installing connected cameras inside your home is of course a horrible privacy risk. Which is why the CMU researchers set about investigating the potential of using millimeter wave (mmWave) doppler radar as a medium for detecting different types of human activity.

The challenge they needed to overcome is that while mmWave offers a “signal richness approaching that of microphones and cameras”, as they put it, datasets for training AI models to recognize different human activities from RF signals are not readily available (in the way visual data for training other types of AI models is).

Not to be deterred, they set about synthesizing doppler data to feed a human activity tracking model — devising a software pipeline for training privacy-preserving activity tracking AI models.
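
To make the idea concrete, here is a minimal sketch of one way such synthesis can work, assuming 3D joint positions have already been extracted from video by an off-the-shelf pose estimator: project each joint’s motion onto the line of sight of a virtual radar and histogram the resulting radial velocities over time. This illustrates the general approach rather than the researchers’ actual pipeline.

```python
import numpy as np

def synthetic_doppler(joints, radar_pos, fps=30, v_bins=32, v_max=2.0):
    """Approximate a Doppler-style velocity spectrogram from 3D joint tracks.

    joints:    array of shape (frames, num_joints, 3) in meters, e.g.
               recovered from ordinary video by a pose estimator.
    radar_pos: (3,) position of a virtual radar sensor.
    Returns (frames - 1, v_bins): per-frame histograms of radial velocity
    toward/away from the sensor, a crude stand-in for the signature a
    real mmWave sensor would report.
    """
    rel = joints - radar_pos                      # vectors from radar to joints
    dist = np.linalg.norm(rel, axis=-1)           # (frames, num_joints) ranges
    radial_v = np.diff(dist, axis=0) * fps        # m/s, negative = approaching

    edges = np.linspace(-v_max, v_max, v_bins + 1)
    spectrogram = np.stack(
        [np.histogram(frame_v, bins=edges)[0] for frame_v in radial_v]
    ).astype(np.float32)
    return spectrogram / max(joints.shape[1], 1)  # normalize by joint count

# Example: a fake 2-second clip of 17 joints drifting in front of the sensor.
joints = np.cumsum(np.random.randn(60, 17, 3) * 0.01, axis=0) + [0.0, 0.0, 2.0]
spec = synthetic_doppler(joints, radar_pos=np.array([0.0, 0.0, 0.0]))
print(spec.shape)  # (59, 32) -> frames x velocity bins
```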

The results can be seen in this video, where the model is shown correctly identifying a number of different activities, including cycling, clapping, waving and squats — purely from its ability to interpret the mmWave signal the movements generate, and despite having been trained only on public video data.

“We show how this cross-domain translation can be successful through a series of experimental results,” they write. “Overall, we believe our approach is an important stepping stone towards significantly reducing the burden of training such human sensing systems, and could help bootstrap uses in human-computer interaction.”

Researcher Chris Harrison confirms the mmWave doppler radar-based sensing doesn’t work for “very subtle stuff” (like spotting different facial expressions). But he says it’s sensitive enough to detect less vigorous activity — like eating or reading a book.

The motion detection ability of doppler radar is also limited by a need for line-of-sight between the subject and the sensing hardware. (Aka: “It can’t reach around corners yet.” Which, for those concerned about future robots’ powers of human detection, will surely sound slightly reassuring.)

Detection does require special sensing hardware, of course, but things are already moving on that front: Google has been dipping its toe in via Project Soli — adding a radar sensor to the Pixel 4, for example.

Google’s Nest Hub also integrates the same radar sensing to track sleep quality.

“One of the reasons we haven’t seen more adoption of radar sensors in phones is a lack of compelling use cases (sort of a chicken and egg problem),” Harrison tells TechCrunch. “Our research into radar-based activity detection helps to open more applications (e.g., smarter Siris, who know when you are eating, or making dinner, or cleaning, or working out, etc.).”

Asked whether he sees greater potential in mobile or fixed applications, Harrison reckons there are interesting use-cases for both.

“I see use cases in both mobile and non mobile,” he says. “Returning to the Nest Hub… the sensor is already in the room, so why not use that to bootstrap more advanced functionality in a Google smart speaker (like rep counting your exercises).

“There are a bunch of radar sensors already used in building to detect occupancy (but now they can detect the last time the room was cleaned, for example).”

“Overall, the cost of these sensors is going to drop to a few dollars very soon (some on eBay are already around $1), so you can include them in everything,” he adds. “And as Google is showing with a product that goes in your bedroom, the threat of a ‘surveillance society’ is much less worrisome than with camera sensors.”

Startups like VergeSense are already using sensor hardware and computer vision technology to power real-time analytics of indoor space and activity for the B2B market (such as measuring office occupancy).

But even with local processing of low-resolution image data, there could still be a perception of privacy risk around the use of vision sensors — certainly in consumer environments.

Radar offers an alternative to such visual surveillance that could be a better fit for privacy-risking consumer connected devices such as ‘smart mirrors’.

“If it is processed locally, would you put a camera in your bedroom? Bathroom? Maybe I’m prudish but I wouldn’t personally,” says Harrison.

He also points to earlier research which he says underlines the value of incorporating more types of sensing hardware: “The more sensors, the longer tail of interesting applications you can support. Cameras can’t capture everything, nor do they work in the dark.”

“Cameras are pretty cheap these days, so hard to compete there, even if radar is a bit cheaper. I do believe the strongest advantage is privacy preservation,” he adds.

Of course having any sensing hardware — visual or otherwise — raises potential privacy issues.

A sensor that tells you when a child’s bedroom is occupied may be good or bad depending on who has access to the data, for example. And all sorts of human activity can generate sensitive information, depending on what’s going on. (I mean, do you really want your smart speaker to know when you’re having sex?)

So while radar-based tracking may be less invasive than some other types of sensors, it doesn’t mean there are no potential privacy concerns at all.

As ever, it depends on where and how the sensing hardware is being used. Still, the data radar generates is likely to be far less sensitive than equivalent visual data were it to be exposed via a breach.

“Any sensor should naturally raise the question of privacy — it is a spectrum rather than a yes/no question,” agrees Harrison. “Radar sensors happen to be usually rich in detail, but highly anonymizing, unlike cameras. If your doppler radar data leaked online, it’d be hard to be embarrassed about it. No one would recognize you. If cameras from inside your house leaked online, well… ”

What about the compute costs of synthesizing the training data, given the lack of immediately available doppler signal data?

“It isn’t turnkey, but there are many large video corpuses to pull from (including things like Youtube-8M),” he says. “It is orders of magnitude faster to download video data and create synthetic radar data than having to recruit people to come into your lab to capture motion data.

“One is inherently 1 hour spent for 1 hour of quality data. Whereas you can download hundreds of hours of footage pretty easily from many excellently curated video databases these days. For every hour of video, it takes us about 2 hours to process, but that is just on one desktop machine we have here in the lab. The key is that you can parallelize this, using Amazon AWS or equivalent, and process 100 videos at once, so the throughput can be extremely high.”
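
That throughput point is easy to picture: each clip is converted independently, so the job parallelizes trivially across local cores or a fleet of cloud machines. A minimal sketch, where the hypothetical video_to_synthetic_doppler() function stands in for the expensive per-clip conversion step:

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def video_to_synthetic_doppler(path: Path) -> Path:
    """Placeholder for the expensive per-clip step
    (pose extraction + synthetic Doppler rendering)."""
    out = path.with_suffix(".doppler.npy")
    # ... run pose estimation and Doppler synthesis here, save to `out` ...
    return out

if __name__ == "__main__":
    clips = sorted(Path("videos").glob("*.mp4"))
    # Clips are independent, so throughput scales with the worker count.
    with ProcessPoolExecutor(max_workers=8) as pool:
        for result in pool.map(video_to_synthetic_doppler, clips):
            print("wrote", result)
```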

And while RF signal does reflect, and does so to different degrees off of different surfaces (aka “multi-path interference”), Harrison says the signal reflected by the user “is by far the dominant signal”. Which means they didn’t need to model other reflections in order to get their demo model working. (Though he notes that could be done to further hone capabilities “by extracting big surfaces like walls/ceiling/floor/furniture with computer vision and adding that into the synthesis stage”.)

“The [doppler] signal is actually very high level and abstract, and so it’s not particularly hard to process in real time (much less ‘pixels’ than a camera),” he adds. “Embedded processors in cars use radar data for things like collision braking and blind spot monitoring, and those are low end CPUs (no deep learning or anything).”

The research is being presented at the ACM CHI conference, alongside another Group project — called Pose-on-the-Go — which uses smartphone sensors to approximate the user’s full-body pose without the need for wearable sensors.

CMU researchers from the Group have also previously demonstrated a method for indoor ‘smart home’ sensing on the cheap (also without the need for cameras), as well as — last year — showing how smartphone cameras could be used to give an on-device AI assistant more contextual savvy.

In recent years they’ve also investigated using laser vibrometry and electromagnetic noise to give smart devices better environmental awareness and contextual functionality. Other interesting research out of the Group includes using conductive spray paint to turn anything into a touchscreen. And various methods to extend the interactive potential of wearables — such as by using lasers to project virtual buttons onto the arm of a device user or incorporating another wearable (a ring) into the mix.

The future of human computer interaction looks certain to be a lot more contextually savvy — even if current-gen ‘smart’ devices can still stumble on the basics and seem more than a little dumb.

 


The Last Gameboard raises $4M to ship its digital tabletop gaming platform

The tabletop gaming industry has exploded over the last few years as millions discovered or rediscovered its joys, but it too is evolving — and The Last Gameboard hopes to be the venue for that evolution. The digital tabletop platform has progressed from crowdfunding to a $4 million seed round, and having partnered with some of the biggest names in the industry, plans to ship by the end of the year.

As the company’s CEO and co-founder Shail Mehta explained in a TC Early Stage pitch-off earlier this year, The Last Gameboard is a 16-inch square touchscreen device with a custom OS and a sophisticated method of tracking game pieces and hand movements. The idea is to provide a digital alternative to physical games where that’s practical, and do so with the maximum benefit and minimum compromise.

If the pitch sounds familiar… it’s been attempted once or twice before. I distinctly remember being impressed by the possibilities of D&D on an original Microsoft Surface… back in 2009. And I played with another at PAX many years ago. Mehta said that until very recently there simply wasn’t the technology and the market wasn’t ready.

“People tried this before, but it was either way too expensive or they didn’t have the audience. And the tech just wasn’t there; they were missing that interaction piece,” she explained, and certainly any player will recognize that the, say, iPad version of a game definitely lacks physicality. The advance her company has achieved is in making the touchscreen able to detect not just taps and drags, but game pieces, gestures and movements above the screen, and more.

“What Gameboard does, no other existing touchscreen or tablet on the market can do — it’s not even close,” Mehta said. “We have unlimited touch, game pieces, passive and active… you can use your chess set at home, lift up and put down the pieces, we track it the whole time. We can do unique identifiers with tags and custom shapes. It’s the next step in how interactive surfaces can be.”

It’s accomplished via a not particularly exotic method, which saves the Gameboard from the fate of the Surface and its successors, which cost several thousand dollars due to their unique and expensive makeups. Mehta explained that they work strictly with ordinary capacitive touch data, albeit at a higher framerate than is commonly used, and then use machine learning to characterize and track object outlines. “We haven’t created a completely new mechanism, we’re just optimizing what’s available today,” she said.
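
Mehta didn’t detail the implementation, but the recipe she describes (treat the raw capacitance readings as a low-resolution image, segment it into blobs, then classify and track them over time) can be sketched roughly as below. The threshold, array shapes and detect_pieces() helper are hypothetical illustrations, not Gameboard internals.

```python
import numpy as np
from scipy import ndimage

def detect_pieces(cap_frame, threshold=0.3):
    """Find candidate game-piece footprints in one capacitive frame.

    cap_frame: 2D array of normalized capacitance values (0..1), i.e. the
               raw touch image the controller reports at a high frame rate.
    Returns a list of (row, col, area) blob descriptors. A real system
    would go further: classify each blob's outline (piece vs. finger vs.
    palm) with a trained model and track identities across frames.
    """
    mask = cap_frame > threshold                  # anything pressing or hovering
    labels, n = ndimage.label(mask)               # connected components
    blobs = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        blobs.append((ys.mean(), xs.mean(), len(ys)))  # centroid + area
    return blobs

# Example: a synthetic 32x32 frame with two "pieces" resting on the screen.
frame = np.zeros((32, 32))
frame[5:8, 5:8] = 0.9      # one square base
frame[20:23, 12:15] = 0.8  # another piece
print(detect_pieces(frame))
```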

Image Credits: The Last Gameboard

At $699 for the Gameboard it’s not exactly an impulse buy, either, but the fact of the matter is people spend a lot of money on gaming, with some titles running into multiple hundreds of dollars for all the expansions and pieces. Tabletop is now a more than $20 billion industry. If the experience is as good as they hope to make it, this is an investment many a player will not hesitate (much, anyway) to make.

Of course, the most robust set of gestures and features won’t matter if all they had on the platform were bargain-bin titles and grandpa’s-parlor favorites like “Parcheesi.” Fortunately, The Last Gameboard has managed to stack up some of the most popular tabletop companies out there, and aims to have the definitive digital edition for their games.

Asmodee Digital is probably the biggest catch, having adapted many of today’s biggest hits, from modern classics “Catan” and “Carcassonne” to crowdfunded breakout hit “Scythe” and immense dungeon-crawler “Gloomhaven.” The full list of partners right now includes Dire Wolf Digital, Nomad Games, Auroch Digital, Restoration Games, Steve Jackson Games, Knights of Unity, Skyship Studios, EncounterPlus, PlannarAlly and Sugar Gamers, as well as individual creators and developers.

Image Credits: The Last Gameboard

These games may be best played in person, but have successfully transitioned to digital versions, and one imagines that a larger screen and inclusion of real pieces could make for an improved hybrid experience. There will be options both to purchase games individually, like you might on mobile or Steam, or to subscribe to an unlimited access model (pricing to be determined on both).

It would also be something that the many gaming shops and playing venues might want to have a couple of on hand. Testing out a game in-store and then buying a few to stock, or convincing consumers to do the same, could be a great sales tactic for all involved.

In addition to providing a unique and superior digital version of a game, the device can connect with others to trade moves, send game invites and all that sort of thing. The whole OS, Mehta said, “is alive and real. If we didn’t own it and create it, this wouldn’t work.” This is more than a skin on top of Android with a built-in store, but there’s enough shared that Android-based ports will be able to be brought over with little fuss.

Head of content Lee Allentuck suggested that the last couple years (including the pandemic) have started to change game developers’ and publishers’ minds about the readiness of the industry for what’s next. “They see the digital crossover is going to happen — people are playing online board games now. If you can be part of that new trend at the very beginning, it gives you a big opportunity,” he said.

CEO Shail Mehta (center) plays Stop Thief on the Gameboard with others on the team. Image Credits: The Last Gameboard

Allentuck, who previously worked at Hasbro, said there’s widespread interest in the toy and tabletop industry to be more tech-forward, but there’s been a “chicken and egg scenario,” where there’s no market because no one innovates, and no one innovates because there’s no market. Fortunately things have progressed to the point where a company like The Last Gameboard can raise $4 million to help cover the cost of creating that market.

The round was led by TheVentureCity, with participation from SOSV, Riot Games, Conscience VC, Corner3 VC and others. While the company didn’t go to HAX’s Shenzhen program as planned, they are still HAX-affiliated. SOSV partner Garrett Winther gave a glowing recommendation of its approach: “They are the first to effectively tie collaborative physical and digital gameplay together while not losing the community, storytelling or competitive foundations that we all look for in gaming.”

Mehta noted that the pandemic nearly cooked the company by derailing their funding, which was originally supposed to come through around this time last year when everything went pear-shaped. “We had our functioning prototype, we had filed for a patent, we got the traction, we were gonna raise, everything was great… and then COVID hit,” she recalled. “But we got a lot of time to do R&D, which was actually kind of a blessing. Our team was super small so we didn’t have to lay anyone off — we just went into survival mode for like six months and optimized, developed the platform. 2020 was rough for everyone, but we were able to focus on the core product.”

Now the company is poised to start its beta program over the summer and (following feedback from that) ship its first production units before the holiday season when purchases like this one seem to make a lot of sense.

(This article originally referred to this raise as The Last Gameboard’s round A — it’s actually the seed. This has been updated.)

Someone already turned Apple’s AirTag into a slim, wallet-friendly card

Apple’s new AirTag item trackers are pretty small, but not quite small enough to slip into most wallets without adding an obvious bit of bulk.

Fortunately, as one talented AirTag owner has found, that’s nothing you can’t fix with a heat gun, a bit of soldering and an understanding that you could totally fry your shiny new AirTag in the blink of an eye. Oh, and a 3D printer.

When Andrew Ngai realized that much of AirTag’s thickness came from its PCB and its battery being stacked atop each other, he set out to instead arrange them side-by-side. With the help of some iFixit guides (which, by the way, provide an awesome peek inside the AirTag if you’re curious what’s in there but aren’t looking to dissect one yourself), Andrew tore the AirTag down to its key components. After making sure everything still worked in its freshly disassembled state, he 3D printed a new case, soldered in wires to connect the board to the battery at a distance, and put everything back together. Success! And he did it all within just days of AirTag being released.

While this sort of project requires a pretty broad set of skills to pull off, Andrew has kindly handled one of the steps for anyone looking to take it on: he’s uploaded the STL file for the 3D-printed card holder as a free download on Thingiverse. (Or you could, of course, just buy a Tile Slim. But that doesn’t involve soldering irons and 3D printing, so where’s the fun in that?)

[via 9to5mac]

Oculii looks to supercharge radar for autonomy with $55M round B

Autonomous vehicles rely on many sensors to perceive the world around them, and while cameras and lidar get a lot of the attention, good old radar is an important piece of the puzzle — though it has some fundamental limitations. Oculii, which just raised a $55 million round, aims to minimize those limitations and make radar more capable with a smart software layer for existing devices — and sell its own as well.


Lightmatter’s photonic AI ambitions light up an $80M B round

AI is fundamental to many products and services today, but its hunger for data and computing cycles is bottomless. Lightmatter plans to leapfrog Moore’s law with its ultra-fast photonic chips specialized for AI work, and with a new $80 million round, the company is poised to take its light-powered computing to market.

We first covered Lightmatter in 2018, when the founders were fresh out of MIT and had raised $11 million to prove that their idea of photonic computing was as valuable as they claimed. They spent the next three years and change building and refining the tech — and running into all the hurdles that hardware startups and technical founders tend to find.

For a full breakdown of what the company’s tech does, read that feature — the essentials haven’t changed.

In a nutshell, Lightmatter’s chips perform in a flash — literally — certain complex calculations fundamental to machine learning. Instead of using charge, logic gates and transistors to record and manipulate data, the chips use photonic circuits that perform the calculations by manipulating the path of light. It’s been possible for years, but until recently getting it to work at scale, and for a practical, indeed a highly valuable purpose, has not.

Prototype to product

It wasn’t entirely clear in 2018 when Lightmatter was getting off the ground whether this tech would be something they could sell to replace more traditional compute clusters like the thousands of custom units companies like Google and Amazon use to train their AIs.

“We knew in principle the tech should be great, but there were a lot of details we needed to figure out,” CEO and co-founder Nick Harris told TechCrunch in an interview. “Lots of hard theoretical computer science and chip design challenges we needed to overcome… and COVID was a beast.”

With suppliers out of commission and many in the industry pausing partnerships, delaying projects and other things, the pandemic put Lightmatter months behind schedule, but they came out the other side stronger. Harris said that the challenges of building a chip company from the ground up were substantial, if not unexpected.

Image Credits: Lightmatter

“In general what we’re doing is pretty crazy,” he admitted. “We’re building computers from nothing. We design the chip, the chip package, the card the chip package sits on, the system the cards go in, and the software that runs on it… we’ve had to build a company that straddles all this expertise.”

That company has grown from its handful of founders to more than 70 employees in Mountain View and Boston, and the growth will continue as it brings its new product to market.

Where a few years ago Lightmatter’s product was more of a well-informed twinkle in the eye, now it has taken a more solid form in the Envise, which they call a “general-purpose photonic AI accelerator.” It’s a server unit designed to fit into normal data center racks but equipped with multiple photonic computing units, which can perform neural network inference processes at mind-boggling speeds. (It’s limited to certain types of calculations, namely linear algebra for now, and not complex logic, but this type of math happens to be a major component of machine learning processes.)
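
To make that parenthetical concrete: the overwhelming share of arithmetic in neural network inference is dense matrix multiplication, which is exactly the piece an accelerator like this can take over while a conventional host handles the surrounding control logic. Here is a toy sketch with the matmul factored out as the accelerator-shaped step (illustrative only, not Lightmatter’s API).

```python
import numpy as np

def matmul_offload(x, w):
    """The accelerator-shaped step: a dense matrix multiply. On hardware
    like Envise this is the part done optically; here NumPy stands in."""
    return x @ w

def mlp_forward(x, weights, biases):
    """A toy multilayer perceptron: each layer is matmul + bias + ReLU.
    Nearly all of the FLOPs live inside matmul_offload."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(matmul_offload(x, w) + b, 0.0)
    return matmul_offload(x, weights[-1]) + biases[-1]

# Example: a batch of 4 inputs through a 128 -> 256 -> 10 network.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((128, 256)) * 0.05,
           rng.standard_normal((256, 10)) * 0.05]
biases = [np.zeros(256), np.zeros(10)]
out = mlp_forward(rng.standard_normal((4, 128)), weights, biases)
print(out.shape)  # (4, 10)
```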

Harris was reticent to provide exact numbers on performance improvements, but more because those improvements are increasing than that they’re not impressive enough. The website suggests it’s 5x faster than an Nvidia A100 unit on a large transformer model like BERT, while using about 15% of the energy. That makes the platform doubly attractive to deep-pocketed AI giants like Google and Amazon, which constantly require more computing power and pay through the nose for the energy required to use it. Either better performance or lower energy cost would be great — both together is irresistible.

It’s Lightmatter’s initial plan to test these units with its most likely customers by the end of 2021, refining them and bringing them up to production levels so they can be sold widely. But Harris emphasized this was essentially the Model T of their new approach.

“If we’re right, we just invented the next transistor,” he said, and for the purposes of large-scale computing, the claim is not without merit. You’re not going to have a miniature photonic computer in your hand any time soon, but in data centers, where as much as 10% of the world’s power is predicted to go by 2030, “they really have unlimited appetite.”

The color of math

Image Credits: Lightmatter

There are two main ways by which Lightmatter plans to improve the capabilities of its photonic computers. The first, and most insane-sounding, is processing in different colors.

It’s not so wild when you think about how these computers actually work. Transistors, which have been at the heart of computing for decades, use electricity to perform logic operations, opening and closing gates and so on. At a macro scale you can have different frequencies of electricity that can be manipulated like waveforms, but at this smaller scale it doesn’t work like that. You just have one form of currency, electrons, and gates are either open or closed.

In Lightmatter’s devices, however, light passes through waveguides that perform the calculations as it goes, simplifying (in some ways) and speeding up the process. And light, as we all learned in science class, comes in a variety of wavelengths — all of which can be used independently and simultaneously on the same hardware.

The same optical magic that lets a signal sent from a blue laser be processed at the speed of light works for a red or a green laser with minimal modification. And if the light waves don’t interfere with one another, they can travel through the same optical components at the same time without losing any coherence.

Image Credits: Lightmatter

That means that if a Lightmatter chip can do, say, a million calculations a second using a red laser source, adding another color doubles that to two million, adding another makes three — with very little in the way of modification needed. The chief obstacle is getting lasers that are up to the task, Harris said. Being able to take roughly the same hardware and near-instantly double, triple or 20x the performance makes for a nice roadmap.

It also leads to the second challenge the company is working on clearing away, namely interconnect. Any supercomputer is composed of many small individual computers, thousands and thousands of them, working in perfect synchrony. In order for them to do so, they need to communicate constantly to make sure each core knows what other cores are doing, and otherwise coordinate the immensely complex computing problems supercomputing is designed to take on. (Intel talks about this “concurrency” problem in building an exa-scale supercomputer here.)

“One of the things we’ve learned along the way is, how do you get these chips to talk to each other when they get to the point where they’re so fast that they’re just sitting there waiting most of the time?” said Harris. The Lightmatter chips are doing work so quickly that they can’t rely on traditional computing cores to coordinate between them.

A photonic problem, it seems, requires a photonic solution: a wafer-scale interconnect board that uses waveguides instead of fiber optics to transfer data between the different cores. Fiber connections aren’t exactly slow, of course, but they aren’t infinitely fast, and the fibers themselves are actually fairly bulky at the scales at which chips are designed, limiting the number of channels you can have between cores.

“We built the optics, the waveguides, into the chip itself; we can fit 40 waveguides into the space of a single optical fiber,” said Harris. “That means you have way more lanes operating in parallel — it gets you to absurdly high interconnect speeds.” (Chip and server fiends can find the specs here.)

The optical interconnect board is called Passage, and will be part of a future generation of its Envise products — but, as with computing in multiple colors, it’s further down the road. For now, 5-10x performance at a fraction of the power will have to satisfy potential customers.

Putting that $80M to work

Those customers, initially the “hyper-scale” data handlers that already own data centers and supercomputers that they’re maxing out, will be getting the first test chips later this year. That’s where the B round is primarily going, Harris said: “We’re funding our early access program.”

That means both building hardware to ship (very expensive per unit before economies of scale kick in, not to mention the present difficulties with suppliers) and building the go-to-market team. Servicing, support and the immense amount of software that goes along with something like this — there’s a lot of hiring going on.

The round itself was led by Viking Global Investors, with participation from HP Enterprise, Lockheed Martin, SIP Global Partners, and previous investors GV, Matrix Partners and Spark Capital. It brings the total raised to about $113 million: there was the initial $11 million A round, then GV hopping on with a $22 million A-1, then this $80 million.

Although there are other companies pursuing photonic computing and its potential applications in neural networks especially, Harris didn’t seem to feel that they were nipping at Lightmatter’s heels. Few if any seem close to shipping a product, and at any rate this is a market that is in the middle of its hockey stick moment. He pointed to an OpenAI study indicating that the demand for AI-related computing is increasing far faster than existing technology can provide it, except with ever larger data centers.

The next decade will bring economic and political pressure to rein in that power consumption, just as we’ve seen with the cryptocurrency world, and Lightmatter is poised and ready to provide an efficient, powerful alternative to the usual GPU-based fare.

As Harris suggested hopefully earlier, what his company has made is potentially transformative in the industry, and if so there’s no hurry — if there’s a gold rush, they’ve already staked their claim.

 


Cognixion’s brain-monitoring headset enables fluid communication for people with severe disabilities

Of the many frustrations of having a severe motor impairment, the difficulty of communicating must surely be among the worst. The tech world has not offered much succor to those affected by things like locked-in syndrome, ALS and severe strokes, but startup Cognixion aims to change that with a novel form of brain monitoring that, combined with a modern interface, could make speaking and interaction far simpler and faster.

The company’s Cognixion One headset tracks brain activity closely in such a way that the wearer can direct a cursor — reflected on a visor like a heads-up display — in multiple directions, or select from various menus and options. No physical movement is needed, and with the help of modern voice interfaces like Alexa, the user can not only communicate efficiently but freely access all kinds of information and content most people take for granted.

But it’s not a miracle machine, and it isn’t a silver bullet. Here’s how it got started.

Overhauling decades-old brain tech

Everyone with a motor impairment has different needs and capabilities, and there are a variety of assistive technologies that cater to many of these needs. But many of these techs and interfaces are years or decades old — medical equipment that hasn’t been updated for an era of smartphones and high-speed mobile connections.

Some of the most dated interfaces, unfortunately, are those used by people with the most serious limitations: those whose movements are limited to their heads, faces, eyes — or even a single eyelid, like Jean-Dominique Bauby, the famous author of “The Diving Bell and the Butterfly.”

One of the tools in the toolbox is the electroencephalogram, or EEG, which involves detecting activity in the brain via patches on the scalp that record electrical signals. But while they’re useful in medicine and research in many ways, EEGs are noisy and imprecise — more for finding which areas of the brain are active than, say, which sub-region of the sensory cortex or the like. And of course you have to wear a shower cap wired with electrodes (often greasy with conductive gel) — it’s not the kind of thing anyone wants to do for more than an hour, let alone all day every day.

Yet even among those with the most profound physical disabilities, cognition is often unimpaired — as indeed EEG studies have helped demonstrate. It made Andreas Forsland, co-founder and CEO of Cognixion, curious about further possibilities for the venerable technology: “Could a brain-computer interface using EEG be a viable communication system?”

He first used EEG for assistive purposes in a research study some five years ago. They were looking into alternative methods of letting a person control an on-screen cursor, among them an accelerometer for detecting head movements, and tried integrating EEG readings as another signal. But it was far from a breakthrough.

A modern lab with an EEG cap wired to a receiver and laptop — this is an example of how EEG is commonly used. Image Credits: BSIP/Universal Images Group via Getty Images

He ran down the difficulties: “With a read-only system, the way EEG is used today is no good; other headsets have slow sample rates and they’re not accurate enough for a real-time interface. The best BCIs are in a lab, connected to wet electrodes — it’s messy, it’s really a non-starter. So how do we replicate that with dry, passive electrodes? We’re trying to solve some very hard engineering problems here.”

The limitations, Forsland and his colleagues found, were not so much with the EEG itself as with the way it was carried out. This type of brain monitoring is meant for diagnosis and study, not real-time feedback. It would be like taking a tractor to a drag race. Not only do EEGs often work with a slow, thorough check of multiple regions of the brain that may last several seconds, but the signal it produces is analyzed by dated statistical methods. So Cognixion started by questioning both practices.

Improving the speed of the scan is more complicated than overclocking the sensors or something. Activity in the brain must be inferred by collecting a certain amount of data. But that data is collected passively, so Forsland tried bringing an active element into it: a rhythmic electric stimulation that is in a way reflected by the brain region, but changed slightly depending on its state — almost like echolocation.
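
That description is in the family of frequency-tagged stimulation: give each selectable target its own steady rhythm, then see which rhythm dominates the response recorded over the visual cortex. Below is a generic sketch of that decoding step (the textbook version, not necessarily Cognixion’s method), with made-up channel counts and frequencies.

```python
import numpy as np

def decode_attended_target(eeg, fs, tag_freqs, harmonics=2):
    """Pick which rhythmically flickering target the user is attending to.

    eeg:       array (channels, samples) from electrodes over visual cortex.
    fs:        sample rate in Hz.
    tag_freqs: stimulation frequency (Hz) assigned to each on-screen target.
    Scores each target by the spectral power at its tag frequency (plus
    harmonics), summed over channels, and returns the winning index.
    """
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    scores = []
    for f in tag_freqs:
        score = 0.0
        for h in range(1, harmonics + 1):
            bin_idx = np.argmin(np.abs(freqs - f * h))
            score += spectrum[:, bin_idx].sum()
        scores.append(score)
    return int(np.argmax(scores))

# Example: 6 channels, 1 second at 250 Hz, with a 12 Hz rhythm buried in noise.
fs, t = 250, np.arange(250) / 250
eeg = np.random.randn(6, 250) + 0.8 * np.sin(2 * np.pi * 12 * t)
print(decode_attended_target(eeg, fs, tag_freqs=[8.0, 10.0, 12.0]))  # likely 2
```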

The Cognixion One headset with its dry EEG terminals visible. Image Credits: Cognixion

They detect these signals with a custom set of six EEG channels in the visual cortex area (up and around the back of your head), and use a machine learning model to interpret the incoming data. Running a convolutional neural network locally on an iPhone — something that wasn’t really possible a couple years ago — the system can not only tease out a signal in short order but make accurate predictions, making for faster and smoother interactions.
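
For a sense of the kind of model involved, a small 1D convolutional network over short multichannel EEG windows might look like the sketch below; the layer sizes and the TinyEEGNet name are invented for illustration and are not the company’s actual network.

```python
import torch
from torch import nn

class TinyEEGNet(nn.Module):
    """A small 1D CNN over short multichannel EEG windows: the general
    shape of on-device model described, not Cognixion's real one."""
    def __init__(self, channels=6, n_targets=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_targets)

    def forward(self, x):            # x: (batch, channels, samples)
        return self.head(self.features(x).squeeze(-1))

# A 0.5-second window from 6 dry electrodes at 250 Hz, scored in one pass.
model = TinyEEGNet()
scores = model(torch.randn(1, 6, 125))
print(scores.shape)  # (1, 6) -> one score per selectable target
```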

The result is sub-second latency with 95-100% accuracy in a wireless headset powered by a mobile phone. “The speed, accuracy and reliability are getting to commercial levels — we can match the best in class of the current paradigm of EEGs,” said Forsland.

Dr. William Goldie, a clinical neurologist who has used and studied EEGs and other brain monitoring techniques for decades (and who has been voluntarily helping Cognixion develop and test the headset), offered a positive evaluation of the technology.

“There’s absolutely evidence that brainwave activity responds to thinking patterns in predictable ways,” he noted. This type of stimulation and response was studied years ago. “It was fascinating, but back then it was sort of in the mystery magic world. Now it’s resurfacing with these special techniques and the computerization we have these days. To me it’s an area that’s opening up in a manner that I think clinically could be dramatically effective.”

BCI, meet UI

The first thing Forsland told me was “We’re a UI company.” And indeed even such a step forward in neural interfaces as he later described means little if it can’t be applied to the problem at hand: helping people with severe motor impairment to express themselves quickly and easily.

Sad to say, it’s not hard to imagine improving on the “competition,” things like puff-and-blow tubes and switches that let users laboriously move a cursor right, right a little more, up, up a little more, then click: a letter! Gaze detection is of course a big improvement over this, but it’s not always an option (eyes don’t always work as well as one would like) and the best eye-tracking solutions (like a Tobii Dynavox tablet) aren’t portable.

Why shouldn’t these interfaces be as modern and fluid as any other? The team set about making a UI with this and the capabilities of their next-generation EEG in mind.

Image Credits: Cognixion

Their solution takes bits from the old paradigm and combines them with modern virtual assistants and a radial design that prioritizes quick responses and common needs. It all runs in an app on an iPhone, the display of which is reflected in a visor, acting as a HUD and outward-facing display.

In easy reach of, not to say a single thought but at least a moment’s concentration or a tilt of the head, are everyday questions and responses — yes, no, thank you, etc. Then there are slots to put prepared speech into — names, menu orders and so on. And then there’s a keyboard with word- and sentence-level prediction that allows common words to be popped in without spelling them out.

“We’ve tested the system with people who rely on switches, who might take 30 minutes to make 2 selections. We put the headset on a person with cerebral palsy, and she typed out her name and hit play in 2 minutes,” Forsland said. “It was ridiculous, everyone was crying.”

Goldie noted that there’s something of a learning curve. “When I put it on, I found that it would recognize patterns and follow through on them, but it also sort of taught patterns to me. You’re training the system, and it’s training you — it’s a feedback loop.”

“I can be the loudest person in the room”

One person who has found it extremely useful is Chris Benedict, a DJ, public speaker and disability advocate who himself has Dyskinetic Cerebral Palsy. It limits his movements and ability to speak, but it doesn’t stop him from spinning (digital) records at various engagements, or from explaining his experience with the headset over email. (And you can see him demonstrating it in person in the video above.)

Image Credits: Cognixion

“Even though it’s not a tool that I’d need all the time it’s definitely helpful in aiding my communication,” he told me. “Especially when I need to respond quickly or am somewhere that is noisy, which happens often when you are a DJ. If I wear it with a Bluetooth speaker I can be the loudest person in the room.” (He always has a speaker on hand, since “you never know when you might need some music.”)

The benefits offered by the headset give some idea of what is lacking from existing assistive technology (and what many people take for granted).

“I can use it to communicate, but at the same time I can make eye contact with the person I’m talking to, because of the visor. I don’t have to stare at a screen between me and someone else. This really helps me connect with people,” Benedict explained.

“Because it’s a headset I don’t have to worry about getting in and out of places, there is no extra bulk added to my chair that I have to worry about getting damaged in a doorway. The headset is balanced too, so it doesn’t make my head lean back or forward or weigh my neck down,” he continued. “When I set it up to use the first time it had me calibrate, and it measured my personal range of motion so the keyboard and choices fit on the screen specifically for me. It can also be recalibrated at any time, which is important because not every day is my range of motion the same.”

Alexa, which has been extremely helpful to people with a variety of disabilities due to its low cost and wide range of compatible devices, is also part of the Cognixion interface, something Benedict appreciates, having himself adopted the system for smart home and other purposes. “With other systems this isn’t something you can do, or if it is an option, it’s really complicated,” he said.

Next steps

As Benedict demonstrates, there are people for whom a device like Cognixion’s makes a lot of sense, and the hope is it will be embraced as part of the necessarily diverse ecosystem of assistive technology.

Forsland said that the company is working closely with the community, from users to clinical advisors like Goldie and other specialists, like speech therapists, to make the One headset as good as it can be. But the hurdle, as with so many devices in this class, is how to actually put it on people’s heads — financially and logistically speaking.

Cognixion is applying for FDA clearance to get the cost of the headset — which, being powered by a phone, is not as high as it would be with an integrated screen and processor — covered by insurance. But in the meantime the company is working with clinical and corporate labs that are doing neurological and psychological research. Places where you might find an ordinary, cumbersome EEG setup, in other words.

The company has raised funding and is looking for more (hardware development and medical pursuits don’t come cheap), and has also collected a number of grants.

The Cognixion One headset may still be some years away from wider use (the FDA is never in a hurry), but that allows the company time to refine the device and include new advances. Unlike many other assistive devices, for example a switch or joystick, this one is largely software-limited, meaning better algorithms and UI work will significantly improve it. While many wait for companies like Neuralink to create a brain-computer interface for the modern era, Cognixion has already done so for a group of people who have much more to gain from it.

You can learn more about the Cognixion One headset and sign up to receive the latest at its site here.

Sony announces investment and partnership with Discord to bring the chat app to PlayStation

Sony and Discord have announced a partnership that will integrate the latter’s popular gaming-focused chat app with PlayStation’s own built-in social tools. It’s a big move and a fairly surprising one given how recently acquisition talks were in the air — Sony appears to have offered a better deal than Microsoft, taking an undisclosed minority stake in the company ahead of a rumored IPO.

The exact nature of the partnership is not expressed in the brief announcement post. The closest we come to hearing what will actually happen is that the two companies plan to “bring the Discord and PlayStation experiences closer together on console and mobile starting early next year,” which at least is easy enough to imagine.

Discord has partnered with console platforms before, though its deal with Microsoft was not a particularly deep integration. This is almost certainly less a “friends can see what you’re playing on PS5” feature and more a “this is an alternative chat infrastructure for anyone on a Sony system” arrangement. Chances are it’ll be a deep, system-wide but clearly Discord-branded option — such as a “Start a voice chat with Discord” prompt when you invite a friend to your game or join theirs.

The timeline of early 2022 also suggests that this is a major product change, probably coinciding with a big platform update on Sony’s long-term PS5 roadmap.

While the new PlayStation is better than the old one when it comes to voice chat, the old one wasn’t great to begin with, and Discord is not just easier to use but something millions of gamers already do use daily. And these days, if a game isn’t an exclusive, being robustly cross-platform is the next best option — so PS5 players being able to seamlessly join and chat with PC players will reduce a pain point there.

Of course Microsoft has its own advantages, running both the Xbox and Windows ecosystems, but it has repeatedly fumbled this opportunity and the acquisition of Discord might have been the missing piece that tied it all together. That bird has flown, of course, and while Microsoft’s acquisition talks reportedly valued Discord at some $10 billion, it seems the growing chat app decided it would rather fly free with an IPO and attempt to become the dominant voice platform everywhere rather than become a prized pet.

Sony has done its part, financially speaking, by taking part in Discord’s recent $100 million H round. The amount they contributed is unknown, but perforce it can’t be more than a small minority stake, given how much the company has taken on and its total valuation.

Facebook is buying the developer behind VR shooter ‘Onward’

After a steady stream of studio acquisitions in late 2019 and early 2020, Facebook has been a little quieter in recent months when it came to bulking up its VR content arm.

Today, the social media giant breaks that quiet streak, announcing its acquisition of Downpour Interactive, the developer of the popular VR first-person shooter Onward. The title, which is available on the company’s Rift and Quest platforms, as well as through Valve’s Steam store, has been among virtual reality’s top sellers in recent years.

Facebook says that the title will continue to be available on non-Facebook VR hardware going forward.

It’s an interesting deal, particularly after the company’s recent attempt to create an ambitious first-person shooter of its own, partnering with Apex Legends developer Respawn Entertainment and dumping millions into a Medal of Honor VR title that was tepidly received among reviewers after its release this past December.

Facebook didn’t share terms of the Downpour deal, though they noted that the entire team will be joining Oculus Studios. In a blog post detailing the deal, Mike Verdu, Facebook’s VP of AR/VR Content, called Onward a “multiplayer masterpiece.”