
Deep Science: Robots, meet world

Research papers come out far too frequently for anyone to read them all. That’s especially true in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect some of the most relevant recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

This edition, we have a lot of items concerned with the interface between AI or robotics and the real world. Of course, most applications of this type of technology are ultimately aimed at the real world, but this research is specifically about the inevitable difficulties that occur due to limitations on either side of the real-virtual divide.

One issue that constantly comes up in robotics is how slowly things actually move in the real world. Naturally, some robots trained on certain tasks can perform them with superhuman speed and agility, but for most that’s not the case. They need to check their observations against their virtual model of the world so frequently that tasks like picking up an item and putting it down can take minutes.

What’s especially frustrating about this is that the real world is the best place to train robots, since ultimately they’ll be operating in it. One approach to addressing this is to increase the value of every hour of real-world testing you do, which is the goal of this project over at Google.

In a rather technical blog post the team describes the challenge of using and integrating data from multiple robots learning and performing multiple tasks. It’s complicated, but they talk about creating a unified process for assigning and evaluating tasks, and adjusting future assignments and evaluations based on that. More intuitively, they create a process by which success at task A improves the robots’ ability to do task B, even if they’re different.

Humans do it — knowing how to throw a ball well gives you a head start on throwing a dart, for instance. Making the most of valuable real-world training is important, and this shows there’s lots more optimization to do there.
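Google’s post is light on implementation detail, so the following is only a toy sketch of the assign-evaluate-adjust loop it describes, not Google’s actual system. The task names, the assignment policy and the simulated outcomes are all invented; the point is the shape of the loop, where robot-hours flow toward tasks with headroom and every episode lands in a shared experience pool:

```python
# Toy sketch of an assign-evaluate-adjust loop for multi-task robot training.
# Everything here is illustrative, not Google's implementation.
import random
from collections import defaultdict

tasks = ["pick", "place", "open_drawer"]
success_rate = {t: 0.0 for t in tasks}
shared_experience = defaultdict(list)   # episodes reusable across tasks

def assign_task():
    # Favor tasks with the most headroom left (purely illustrative policy).
    weights = [1.0 - success_rate[t] + 0.05 for t in tasks]
    return random.choices(tasks, weights=weights)[0]

for episode in range(1000):
    task = assign_task()
    # Stand-in for a real rollout; imagine cross-task transfer nudging
    # this probability upward as the shared pool grows.
    succeeded = random.random() < 0.3 + 0.5 * success_rate[task]
    shared_experience[task].append(succeeded)
    history = shared_experience[task]
    success_rate[task] = sum(history) / len(history)  # evaluate, then adjust

print({t: round(r, 2) for t, r in success_rate.items()})
```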

Another approach is to improve the quality of simulations so they’re closer to what a robot will encounter when it takes its knowledge to the real world. That’s the goal of the Allen Institute for AI’s THOR training environment and its newest denizen, ManipulaTHOR.

Image Credits: Allen Institute

Simulators like THOR provide an analogue to the real world where an AI can learn basic knowledge like how to navigate a room to find a specific object — a surprisingly difficult task! Simulators balance the need for realism with the computational cost of providing it, and the result is a system where a robot agent can spend thousands of virtual “hours” trying things over and over with no need to plug them in, oil their joints and so on.
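For a sense of what working against such a simulator looks like, here is a minimal sketch using the publicly available ai2thor Python package (the simulator behind ManipulaTHOR). The scene name, target object and random-walk "policy" are stand-ins for a real trained agent:

```python
# Minimal random-walk "agent" in AI2-THOR. Assumes `pip install ai2thor`;
# the scene, target object and action set are illustrative.
import random
from ai2thor.controller import Controller

controller = Controller(scene="FloorPlan1", gridSize=0.25)
ACTIONS = ["MoveAhead", "RotateLeft", "RotateRight"]

def target_visible(event, object_type):
    # The simulator reports every object's type and visibility each step.
    return any(
        obj["objectType"] == object_type and obj["visible"]
        for obj in event.metadata["objects"]
    )

event = controller.step(action="Pass")  # no-op step to get the initial state
for _ in range(200):
    if target_visible(event, "Apple"):
        print("Found it!")
        break
    # A trained policy would choose from event.frame (the RGB observation)
    # instead of picking an action uniformly at random.
    event = controller.step(action=random.choice(ACTIONS))
controller.stop()
```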


Liquid Instruments raises $13.7M to bring its education-focused 8-in-1 engineering gadget to market

Part of learning to be an engineer is understanding the tools you’ll have to work with — voltmeters, spectrum analyzers, things like that. But why use two, or eight for that matter, when one will do? The Moku:Go combines several commonly used tools into one compact package, saving room on your workbench or classroom while also providing a modern, software-configurable interface. Creator Liquid Instruments has just raised $13.7 million to bring this gadget to students and engineers everywhere.

Image Credits: Liquid Instruments

The idea behind Moku:Go is largely the same as the company’s previous product, the Moku:Lab. Using a standard input port, a set of FPGA-based tools perform the same kind of breakdowns and analyses of electrical signals as you would get in a larger or analog device. But being digital saves a lot of space that would normally go toward bulky analog components.

The Go takes this miniaturization further than the Lab, doing many of the same tasks at half the weight and with a few useful extra features. It’s intended for use in education or smaller engineering shops where space is at a premium. Combining eight tools into one is a major coup when your bench is also your desk and your file cabinet.

Those eight tools, by the way, are: waveform generator, arbitrary waveform generator, frequency response analyzer, logic analyzer/pattern generator, oscilloscope/voltmeter, PID controller, spectrum analyzer and data logger. It’s hard to say whether that really adds up to more or less than eight, but it’s definitely a lot to have in a package the size of a hardback book.

You access and configure them using a software interface rather than a bunch of knobs and dials — though let’s be clear, there are good arguments for both. When you’re teaching a bunch of young digital natives, however, a clean point-and-click interface is probably a plus. The UI is actually very attractive; you can see several examples by clicking the instruments on this page, but here’s an example of the waveform generator:

Image Credits: Liquid Instruments

Love those pastels.

The Moku:Go currently works with Macs and Windows but doesn’t have a mobile app yet. It integrates with Python, MATLAB and LabVIEW. Data goes over Wi-Fi.
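As a hedged sketch of what that Python integration can look like: the module path and method names below follow Liquid Instruments’ published moku package, but treat them as assumptions and check the vendor docs; the IP address is a placeholder for your unit’s network address.

```python
# Hedged sketch of driving a software-defined instrument like the Moku:Go
# from Python. Names follow the vendor's `moku` package but are assumptions;
# the address below is a placeholder.
from moku.instruments import Oscilloscope

scope = Oscilloscope("192.168.1.100")   # connect over the network
try:
    scope.set_timebase(-1e-3, 1e-3)     # capture 2 ms centered on the trigger
    data = scope.get_data()             # one frame of time/voltage samples
    print(f"{len(data['time'])} samples, ch1 starts at {data['ch1'][0]} V")
finally:
    scope.relinquish_ownership()        # free the instrument for other users
```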

Compared with the Moku:Lab, it has a few perks: a USB-C port instead of a mini, a magnetic power port, a 16-channel digital I/O and an optional power supply of up to four channels — and of course it’s half the size and weight. It compromises on a few things — no SD card slot and less bandwidth for its outputs — but if you need the range and precision of the more expensive tool, you probably need a lot of other stuff too.

Image Credits: Liquid Instruments

Since the smaller option also costs $500 to start (“a price comparable to a textbook”… yikes) compared with the big one’s $3,500, there are major savings involved. And it’s definitely cheaper than buying all those instruments individually.

The Moku:Go is “targeted squarely at university education,” said Liquid Instruments VP of marketing Doug Phillips. “Professors are able to employ the device in the classroom and individuals, such as students and electronic engineering hobbyists, can experiment with it on their own time. Since its launch in March, the most common customer profile has been students purchasing the device at the direction of their university.”

About a hundred professors have signed on to use the device as part of their fall classes, and the company is working with other partners in universities around the world. “There is a real demand for portable, flexible systems that can handle the breadth of four years of curriculum,” Phillips said.

Production starts in June (samples are out to testers), the rigors and costs of which likely prompted the recent round of funding. The $13.7 million comes from existing investors Anzu Partners and ANU Connect Ventures, and new investors F1 Solutions and Moelis Australia’s Growth Capital Fund. It’s a convertible note “in advance of an anticipated Series B round in 2022,” Phillips said. It’s a larger amount than they intended to raise at first, and the note nature of the round is also not standard, but given the difficulties faced by hardware companies over the last year, some irregularities are probably to be expected.

No doubt the expected B round will depend considerably on the success of the Moku:Go’s launch and adoption. But this promising product looks as if it might be a commonplace item in thousands of classrooms a couple years from now.


Apple Watch gets a motion-controlled cursor with ‘Assistive Touch’

Tapping the tiny screen of the Apple Watch with precision has a certain level of fundamental difficulty, but for some people with disabilities it’s genuinely impossible. Apple has remedied this with a new mode called “Assistive Touch” that detects hand gestures to control a cursor and navigate that way.

The feature was announced as part of a collection of accessibility-focused additions across its products, but Assistive Touch seems like the one most likely to make a splash across the company’s user base.

It relies on the built-in gyroscope and accelerometer, as well as data from the heart rate sensor, to deduce the position of the wrist and hand. Don’t expect it to tell a peace sign from a metal sign just yet, but for now it detects “pinch” (touching the index finger to the thumb) and “clench” (making a loose fist), which can act as basic “next” and “confirm” actions. Incoming calls, for instance, can be quickly accepted with a clench.

Most impressive, however, is the motion pointer. You can activate it either by selecting it in the Assistive Touch menu, or by shaking your wrist vigorously. It then detects the position of your hand as you move it around, allowing you to “swipe” by letting the cursor linger at the edge of the screen, or interact with things using a pinch or clench.
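Apple hasn’t published how Assistive Touch classifies these gestures, so the following is purely a toy illustration of the general idea: window the wrist’s motion signals and map each window to a gesture. The window length, features and thresholds are invented for the example.

```python
# Toy gesture classifier over windows of wrist motion data. Thresholds and
# features are made up; Apple's real models are not public.
import numpy as np

WINDOW = 50  # samples per decision, e.g. half a second at 100 Hz

def classify_window(accel, gyro):
    """accel, gyro: (WINDOW, 3) arrays from the accelerometer/gyroscope."""
    accel_energy = float(np.mean(np.linalg.norm(accel, axis=1)))
    gyro_energy = float(np.mean(np.linalg.norm(gyro, axis=1)))
    # Pretend a clench shows up as a big accelerometer burst and a pinch
    # as a smaller, rotation-heavy signature.
    if accel_energy > 2.0:
        return "clench"
    if accel_energy > 0.8 and gyro_energy > 1.0:
        return "pinch"
    return "none"

# Random data standing in for real sensor streams.
rng = np.random.default_rng(0)
print(classify_window(rng.normal(size=(WINDOW, 3)), rng.normal(size=(WINDOW, 3))))
```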

Needless to say this could be extremely helpful for anyone who only has the one hand available for interacting with the watch. And even for those who don’t strictly need it, the ability to keep one hand on the exercise machine, cane or whatever else while doing smartwatch things is surely an attractive possibility. (One wonders about the potential of this control method as a cursor for other platforms as well…)

Image Credits: Apple

Assistive Touch is only one of many accessibility updates Apple shared in this news release; other advances for the company’s platforms include:

  • SignTime, an ASL interpreter video call for Apple Store visits and support
  • Support for new hearing aids
  • Improved VoiceOver-based exploration of images
  • A built-in background noise generator (which I fully intend to use)
  • Replacement of certain buttons with non-verbal mouth noises (for people who have limited speech and mobility)
  • Memoji customizations for people with oxygen tubes, cochlear implants and soft helmets
  • Featured media in the App Store, Apple TV, Books and Maps apps from or geared toward people with disabilities

It’s all clustered around Global Accessibility Awareness Day, which is tomorrow, May 20th.


Everything Google announced at I/O today

This year’s I/O event from Google was heavy on the “we’re building something cool” and light on the “here’s something you can use or buy tomorrow.” But there were also some interesting surprises from the semi-live event held in and around the company’s Mountain View campus. Read on for all the interesting bits.

Xbox teams up with Tencent’s Honor of Kings maker TiMi Studios

TiMi Studios, one of the world’s most lucrative game makers and part of Tencent’s gargantuan digital entertainment empire, said Thursday that it has struck a strategic partnership with Xbox.

The succinct announcement did not mention whether the tie-up is for content development or Xbox’s console distribution in China, but said more details will be unveiled for the “deep partnership” by the end of this year.

Established in 2008 within Tencent, TiMi is behind popular mobile titles such as Honor of Kings and Call of Duty Mobile. In 2020, Honor of Kings alone generated close to $2.5 billion in player spending, according to market research company Sensor Tower. In all, TiMi pocketed $10 billion in revenue last year, according to a Reuters report citing people with knowledge of the matter.

The partnership could help TiMi build a name globally by converting its mobile titles into console versions for Microsoft’s Xbox. TiMi has been trying to strengthen its own brand and distinguish itself from other Tencent gaming clusters, such as its internal rival LightSpeed & Quantum Studio, which is known for PUBG Mobile.

TiMi operates a branch in Los Angeles and said in January 2020 that it planned to “triple” its headcount in North America, adding that building high-budget, high-quality AAA mobile games was core to its global strategy. There are clues in a recruitment notice posted recently by a TiMi employee: The unit is hiring developers for an upcoming AAA title benchmarked against the Oasis, the massively multiplayer online game that evolves into a virtual society in the novel and film Ready Player One, where it is played via a virtual reality headset.

Xbox’s latest Series X and Series S are to debut in China imminently, though the launch doesn’t appear to be linked to the Tencent deal. Sony’s PlayStation 5 just hit the shelves in China in late April. The Nintendo Switch is distributed in China through a partnership with Tencent sealed in 2019.

Chinese console players often resort to grey markets for foreign editions because the list of Chinese titles approved by local authorities is tiny compared to what’s available outside the country. But these grey markets, both online and offline, are subject to ongoing clampdowns. Most recently in March, product listings by multiple top sellers of imported console games vanished from Alibaba’s Taobao marketplace.


Alba Orbital’s mission to image the Earth every 15 minutes brings in $3.4M seed round

Orbital imagery is in demand, and if you think having daily images of everywhere on Earth is going to be enough in a few years, you need a lesson in ambition. Alba Orbital is here to provide one, with its plan to offer Earth observation at intervals of 15 minutes rather than hours or days — and it just raised $3.4 million to get its next set of satellites into orbit.

Alba attracted our attention at Y Combinator’s latest demo day; I was impressed with the startup’s accomplishment of already having six satellites in orbit, which is more than most companies with space ambitions ever get. But it’s only the start for the company, which will need hundreds more to begin to offer its planned high-frequency imagery.

The Scottish company has spent the last few years in prep and R&D, pursuing the goal, which some must have thought laughable, of creating a solar-powered Earth observation satellite that weighs in at less than one kilogram. The joke’s on the skeptics, however — Alba has launched a proof of concept and is ready to send the real thing up as well.

Little more than a flying camera with a minimum of storage, communication, power and movement, the sub-kilogram Unicorn-2 is about the size of a soda can, with paperback-size solar panel wings, and costs in the neighborhood of $10,000. It should be able to capture imagery at up to 10-meter resolution, good enough to see things like buildings, ships, crops, even planes.

Image Credits: Alba Orbital

“People thought we were idiots. Now they’re taking it seriously,” said Tom Walkinshaw, founder and CEO of Alba. “They can see it for what it is: a unique platform for capturing data sets.”

Indeed, although the idea of daily orbital imagery like Planet’s once seemed excessive, in some situations it’s quite clearly not enough.

“The California case is probably wildfires,” said Walkinshaw (and it always helps to have a California case). “Having an image once a day of a wildfire is a bit like having a chocolate teapot… not very useful. And natural disasters like hurricanes, flooding is a big one, transportation as well.”

Walkinshaw noted that the company was bootstrapped and profitable before taking on the task of launching dozens more satellites, something the seed round will enable.

“It gets these birds in the air, gets them finished and shipped out,” he said. “Then we just need to crank up the production rate.”

Image Credits: Alba Orbital

When I talked to Walkinshaw via video call, 10 or so completed satellites in their launch shells were sitting on a rack behind him in the clean room, and more are in the process of assembly. Aiding in the scaling effort is new investor James Park, founder and CEO of Fitbit — definitely someone who knows a little bit about bringing hardware to market.

Interestingly, the next batch to go to orbit (perhaps in as little as a month or two, depending on the machinations of the launch provider) will focus on nighttime imagery, an area Walkinshaw suggested was undervalued. But as orbital thermal imaging startup Satellite Vu has shown, there’s immense appetite for things like energy and activity monitoring, and nighttime observation is a big part of that.

The seed round will get the next few rounds of satellites into space, and after that Alba will be working on scaling manufacturing to produce hundreds more. Once those start going up it can demonstrate the high-cadence imaging it is aiming to produce — for now it’s impossible to do so, though Alba already has customers lined up to buy the imagery it does get.

The round was led by Metaplanet Holdings, with participation by Y Combinator, Liquid2, Soma, Uncommon Denominator, Zillionize and numerous angels.

As for competition, Walkinshaw welcomes it, but feels secure that he and his company have more time and work invested in this class of satellite than anyone in the world — a major obstacle for anyone who wants to do battle. It’s more likely that companies will, as Alba has done, pursue a distinct product complementary to those already on offer or in the works.

“Space is a good place to be right now,” he concluded.

CMU researchers show potential of privacy-preserving activity tracking using radar

Imagine if you could settle/rekindle domestic arguments by asking your smart speaker when the room was last cleaned or whether the bins have already been taken out.

Or — for an altogether healthier use case — what if you could ask your speaker to keep count of reps as you do squats and bench presses? Or switch into full-on ‘personal trainer’ mode — barking orders to pedal faster as you spin cycles on a dusty old exercise bike (who needs a Peloton!).

And what if the speaker was smart enough to just know you’re eating dinner and took care of slipping on a little mood music?

Now imagine if all those activity tracking smarts were on tap without any connected cameras plugged in inside your home.

Fascinating new research out of Carnegie Mellon University’s Future Interfaces Group opens up these sorts of possibilities — demonstrating a novel approach to activity tracking that does not rely on cameras as the sensing tool.

Installing connected cameras inside your home is, of course, a horrible privacy risk, which is why the CMU researchers set about investigating the potential of using millimeter wave (mmWave) doppler radar as a medium for detecting different types of human activity.

The challenge they needed to overcome is that while mmWave offers a “signal richness approaching that of microphones and cameras,” as they put it, data sets for training AI models to recognize different human activities as RF noise are not readily available (unlike the visual data used to train other types of AI models).

Not to be deterred, they set about synthesizing doppler data to feed a human activity tracking model — devising a software pipeline for training privacy-preserving activity tracking AI models.
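The paper has the full details; as a rough illustration of the idea (not the CMU pipeline itself), one can estimate 3D joint trajectories from ordinary video with an off-the-shelf pose estimator, then convert each joint’s radial velocity toward a virtual radar into a per-frame velocity histogram that stands in for a doppler spectrum:

```python
# Rough illustration (not the CMU pipeline): turn joint motion into
# doppler-style frames. `joints` would come from a pose estimator run on
# public video; here it's a fake waving motion.
import numpy as np

def doppler_frames(joints, radar_pos, dt=1 / 30, bins=32, v_max=4.0):
    """joints: (T, J, 3) joint positions in meters -> (T-1, bins) spectra."""
    dist = np.linalg.norm(joints - radar_pos, axis=-1)  # (T, J) ranges
    radial_v = np.diff(dist, axis=0) / dt               # (T-1, J) velocities
    # Histogram each frame's joint velocities: a crude doppler spectrum.
    return np.stack([
        np.histogram(v, bins=bins, range=(-v_max, v_max))[0] for v in radial_v
    ])

T, J = 90, 17                       # 3 seconds at 30 fps, 17 pose joints
t = np.linspace(0, 3, T)
joints = np.zeros((T, J, 3)) + np.array([0.0, 0.0, 2.0])  # body 2 m away
joints[:, 9, 0] += 0.3 * np.sin(2 * np.pi * 1.5 * t)      # one wrist waves
print(doppler_frames(joints, radar_pos=np.zeros(3)).shape)  # (89, 32)
```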

The results can be seen in this video, where the model is shown correctly identifying a number of different activities, including cycling, clapping, waving and squats, purely from its ability to interpret the mmWave signal the movements generate, and despite having been trained only on public video data.

“We show how this cross-domain translation can be successful through a series of experimental results,” they write. “Overall, we believe our approach is an important stepping stone towards significantly reducing the burden of training such human sensing systems, and could help bootstrap uses in human-computer interaction.”

Researcher Chris Harrison confirms the mmWave doppler radar-based sensing doesn’t work for “very subtle stuff” (like spotting different facial expressions). But he says it’s sensitive enough to detect less vigorous activity — like eating or reading a book.

The motion detection ability of doppler radar is also limited by a need for line-of-sight between the subject and the sensing hardware. (Aka: “It can’t reach around corners yet.” Which, for those concerned about future robots’ powers of human detection, will surely sound slightly reassuring.)

Detection does require special sensing hardware, of course. But things are already moving on that front: Google has been dipping its toe in already, via Project Soli — adding a radar sensor to the Pixel 4, for example.

Google’s Nest Hub also integrates the same radar sensor to track sleep quality.

“One of the reasons we haven’t seen more adoption of radar sensors in phones is a lack of compelling use cases (sort of a chicken and egg problem),” Harrison tells TechCrunch. “Our research into radar-based activity detection helps to open more applications (e.g., smarter Siris, who know when you are eating, or making dinner, or cleaning, or working out, etc.).”

Asked whether he sees greater potential in mobile or fixed applications, Harrison reckons there are interesting use cases for both.

“I see use cases in both mobile and non mobile,” he says. “Returning to the Nest Hub… the sensor is already in the room, so why not use that to bootstrap more advanced functionality in a Google smart speaker (like rep counting your exercises).

“There are a bunch of radar sensors already used in buildings to detect occupancy (but now they can detect the last time the room was cleaned, for example).”

“Overall, the cost of these sensors is going to drop to a few dollars very soon (some on eBay are already around $1), so you can include them in everything,” he adds. “And as Google is showing with a product that goes in your bedroom, the threat of a ‘surveillance society’ is much less worrisome than with camera sensors.”

Startups like VergeSense are already using sensor hardware and computer vision technology to power real-time analytics of indoor space and activity for the B2B market (such as measuring office occupancy).

But even with local processing of low-resolution image data, there could still be a perception of privacy risk around the use of vision sensors — certainly in consumer environments.

Radar offers an alternative to such visual surveillance that could be a better fit for privacy-risking consumer connected devices such as ‘smart mirrors’.

“If it is processed locally, would you put a camera in your bedroom? Bathroom? Maybe I’m prudish but I wouldn’t personally,” says Harrison.

He also points to earlier research which he says underlines the value of incorporating more types of sensing hardware: “The more sensors, the longer tail of interesting applications you can support. Cameras can’t capture everything, nor do they work in the dark.”

“Cameras are pretty cheap these days, so hard to compete there, even if radar is a bit cheaper. I do believe the strongest advantage is privacy preservation,” he adds.

Of course having any sensing hardware — visual or otherwise — raises potential privacy issues.

A sensor that tells you when a child’s bedroom is occupied may be good or bad depending on who has access to the data, for example. And all sorts of human activity can generate sensitive information, depending on what’s going on. (I mean, do you really want your smart speaker to know when you’re having sex?)

So while radar-based tracking may be less invasive than some other types of sensors it doesn’t mean there are no potential privacy concerns at all.

As ever, it depends on where and how the sensing hardware is being used. Still, it’s hard to argue that the data radar generates would be as sensitive as equivalent visual data, were it to be exposed via a breach.

“Any sensor should naturally raise the question of privacy — it is a spectrum rather than a yes/no question,” agrees Harrison. “Radar sensors happen to be usually rich in detail, but highly anonymizing, unlike cameras. If your doppler radar data leaked online, it’d be hard to be embarrassed about it. No one would recognize you. If cameras from inside your house leaked online, well…”

What about the compute costs of synthesizing the training data, given the lack of immediately available doppler signal data?

“It isn’t turnkey, but there are many large video corpuses to pull from (including things like YouTube-8M),” he says. “It is orders of magnitude faster to download video data and create synthetic radar data than having to recruit people to come into your lab to capture motion data.

“One is inherently 1 hour spent for 1 hour of quality data. Whereas you can download hundreds of hours of footage pretty easily from many excellently curated video databases these days. For every hour of video, it takes us about 2 hours to process, but that is just on one desktop machine we have here in the lab. The key is that you can parallelize this, using Amazon AWS or equivalent, and process 100 videos at once, so the throughput can be extremely high.”

And while RF signal does reflect, and does so to different degrees off of different surfaces (aka “multi-path interference”), Harrison says the signal reflected by the user “is by far the dominant signal”. Which means they didn’t need to model other reflections in order to get their demo model working. (Though he notes that could be done to further hone capabilities “by extracting big surfaces like walls/ceiling/floor/furniture with computer vision and adding that into the synthesis stage”.)

“The [doppler] signal is actually very high level and abstract, and so it’s not particularly hard to process in real time (much less ‘pixels’ than a camera),” he adds. “Embedded processors in cars use radar data for things like collision braking and blind spot monitoring, and those are low-end CPUs (no deep learning or anything).”

The research is being presented at the ACM CHI conference, alongside another Group project — called Pose-on-the-Go — which uses smartphone sensors to approximate the user’s full-body pose without the need for wearable sensors.

CMU researchers from the Group have also previously demonstrated a method for indoor ‘smart home’ sensing on the cheap (also without the need for cameras), as well as — last year — showing how smartphone cameras could be used to give an on-device AI assistant more contextual savvy.

In recent years they’ve also investigated using laser vibrometry and electromagnetic noise to give smart devices better environmental awareness and contextual functionality. Other interesting research out of the Group includes using conductive spray paint to turn anything into a touchscreen. And various methods to extend the interactive potential of wearables — such as by using lasers to project virtual buttons onto the arm of a device user or incorporating another wearable (a ring) into the mix.

The future of human computer interaction looks certain to be a lot more contextually savvy — even if current-gen ‘smart’ devices can still stumble on the basics and seem more than a little dumb.

 


The Last Gameboard raises $4M to ship its digital tabletop gaming platform

The tabletop gaming industry has exploded over the last few years as millions discovered or rediscovered its joys, but it too is evolving — and The Last Gameboard hopes to be the venue for that evolution. The digital tabletop platform has progressed from crowdfunding to a $4 million seed round and, having partnered with some of the biggest names in the industry, plans to ship by the end of the year.

As the company’s CEO and co-founder Shail Mehta explained in a TC Early Stage pitch-off earlier this year, The Last Gameboard is a 16-inch square touchscreen device with a custom OS and a sophisticated method of tracking game pieces and hand movements. The idea is to provide a digital alternative to physical games where that’s practical, and do so with the maximum benefit and minimum compromise.

If the pitch sounds familiar… it’s been attempted once or twice before. I distinctly remember being impressed by the possibilities of D&D on an original Microsoft Surface… back in 2009. And I played with another at PAX many years ago. Mehta said that until very recently there simply wasn’t the technology and the market wasn’t ready.

“People tried this before, but it was either way too expensive or they didn’t have the audience. And the tech just wasn’t there; they were missing that interaction piece,” she explained, and certainly any player will recognize that the, say, iPad version of a game definitely lacks physicality. The advance her company has achieved is in making the touchscreen able to detect not just taps and drags, but game pieces, gestures and movements above the screen, and more.

“What Gameboard does, no other existing touchscreen or tablet on the market can do — it’s not even close,” Mehta said. “We have unlimited touch, game pieces, passive and active… you can use your chess set at home, lift up and put down the pieces, we track it the whole time. We can do unique identifiers with tags and custom shapes. It’s the next step in how interactive surfaces can be.”

It’s accomplished via a not particularly exotic method, one that saves the Gameboard from the fate of the Surface and its successors, which cost several thousand dollars due to their unique and expensive makeups. Mehta explained that they work strictly with ordinary capacitive touch data, albeit at a higher framerate than is commonly used, and then use machine learning to characterize and track object outlines. “We haven’t created a completely new mechanism, we’re just optimizing what’s available today,” she said.
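As a rough sketch of that general technique (not The Last Gameboard’s actual code): treat each raw capacitance frame as a low-resolution image, segment contiguous touch regions, and summarize each region’s footprint so a downstream classifier can tell a fingertip from a game piece. The names and thresholds here are invented.

```python
# Rough sketch: segment "touch" regions in a capacitance frame and extract
# footprint features a classifier could use. Illustrative only.
import numpy as np
from scipy import ndimage

def segment_pieces(cap_frame, threshold=0.5):
    """cap_frame: 2D array of normalized capacitance readings."""
    labels, n = ndimage.label(cap_frame > threshold)  # connected regions
    blobs = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        blobs.append({
            "centroid": (xs.mean(), ys.mean()),
            "area": xs.size,                      # footprint size in cells
            "bbox": (xs.min(), ys.min(), xs.max(), ys.max()),
        })
    return blobs  # feed footprint features to a piece-vs-finger classifier

# A fake frame with one large square "piece" and one single-cell "fingertip".
frame = np.zeros((32, 32))
frame[4:9, 4:9] = 1.0
frame[20, 25] = 1.0
print(segment_pieces(frame))
```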

Image Credits: The Last Gameboard

At $699, the Gameboard isn’t exactly an impulse buy, but the fact of the matter is that people spend a lot of money on gaming, with some titles running into multiple hundreds of dollars for all the expansions and pieces. Tabletop is now a more than $20 billion industry. If the experience is as good as they hope to make it, this is an investment many a player will not hesitate (much, anyway) to make.

Of course, the most robust set of gestures and features won’t matter if all the platform has are bargain-bin titles and grandpa’s-parlor favorites like “Parcheesi.” Fortunately, The Last Gameboard has managed to stack up some of the most popular tabletop companies out there, and aims to have the definitive digital editions of their games.

Asmodee Digital is probably the biggest catch, having adapted many of today’s biggest hits, from modern classics “Catan” and “Carcassonne” to crowdfunded breakout hit “Scythe” and immense dungeon-crawler “Gloomhaven.” The full list of partners right now includes Dire Wolf Digital, Nomad Games, Auroch Digital, Restoration Games, Steve Jackson Games, Knights of Unity, Skyship Studios, EncounterPlus, PlannarAlly and Sugar Gamers, as well as individual creators and developers.

Image Credits: The Last Gameboard

These games may be best played in person, but have successfully transitioned to digital versions, and one imagines that a larger screen and inclusion of real pieces could make for an improved hybrid experience. There will be options both to purchase games individually, like you might on mobile or Steam, or to subscribe to an unlimited access model (pricing to be determined on both).

It would also be something that the many gaming shops and playing venues might want to have a couple on hand. Testing out a game in-store and then buying a few to stock, or convincing consumers to do the same, could be a great sales tactic for all involved.

In addition to providing a unique and superior digital version of a game, the device can connect with others to trade moves, send game invites and all that sort of thing. The whole OS, Mehta said, “is alive and real. If we didn’t own it and create it, this wouldn’t work.” It’s more than a skin on top of Android with a built-in store, but there’s enough shared that Android-based ports can be brought over with little fuss.

Head of content Lee Allentuck suggested that the last couple years (including the pandemic) have started to change game developers’ and publishers’ minds about the readiness of the industry for what’s next. “They see the digital crossover is going to happen — people are playing online board games now. If you can be part of that new trend at the very beginning, it gives you a big opportunity,” he said.

CEO Shail Mehta (center) plays Stop Thief on the Gameboard with others on the team. Image Credits: The Last Gameboard

Allentuck, who previously worked at Hasbro, said there’s widespread interest in the toy and tabletop industry to be more tech-forward, but there’s been a “chicken and egg scenario,” where there’s no market because no one innovates, and no one innovates because there’s no market. Fortunately things have progressed to the point where a company like The Last Gameboard can raise $4 million to help cover the cost of creating that market.

The round was led by TheVentureCity, with participation from SOSV, Riot Games, Conscience VC, Corner3 VC and others. While the company didn’t go to HAX’s Shenzhen program as planned, they are still HAX-affiliated. SOSV partner Garrett Winther gave a glowing recommendation of its approach: “They are the first to effectively tie collaborative physical and digital gameplay together while not losing the community, storytelling or competitive foundations that we all look for in gaming.”

Mehta noted that the pandemic nearly cooked the company by derailing their funding, which was originally supposed to come through around this time last year when everything went pear-shaped. “We had our functioning prototype, we had filed for a patent, we got the traction, we were gonna raise, everything was great… and then COVID hit,” she recalled. “But we got a lot of time to do R&D, which was actually kind of a blessing. Our team was super small so we didn’t have to lay anyone off — we just went into survival mode for like six months and optimized, developed the platform. 2020 was rough for everyone, but we were able to focus on the core product.”

Now the company is poised to start its beta program over the summer and (following feedback from that) ship its first production units before the holiday season, when purchases like this one seem to make a lot of sense.

(This article originally referred to this raise as The Last Gameboard’s round A — it’s actually the seed. This has been updated.)

Someone already turned Apple’s AirTag into a slim, wallet-friendly card

Apple’s new AirTag item trackers are pretty small, but not quite small enough to slip into most wallets without adding an obvious bit of bulk.

Fortunately, as one talented AirTag owner has found, that’s nothing you can’t fix with a heat gun, a bit of soldering and an understanding that you could totally fry your shiny new AirTag in the blink of an eye. Oh, and a 3D printer.

When Andrew Ngai realized that much of AirTag’s thickness came from its PCB and its battery being stacked atop each other, he set out to instead arrange them side-by-side. With the help of some iFixit guides (which, by the way, provide an awesome peek inside the AirTag if you’re curious what’s in there but aren’t looking to dissect one yourself), Andrew tore the AirTag down to its key components. After making sure everything still worked in its freshly disassembled state, he 3D printed a new case, soldered in wires to connect the board to the battery at a distance, and put everything back together. Success! And he did it all within just days of AirTag being released.

While this sort of project requires a pretty broad set of skills to pull off, Andrew has kindly handled one of the steps for anyone looking to take it on: he’s uploaded the STL file for the 3D-printed card holder as a free download on Thingiverse. (Or you could, of course, just buy a Tile Slim. But that doesn’t involve soldering irons and 3D printing, so where’s the fun in that?)

[via 9to5mac]