Google is bringing data saver feature to Android TVs

Google said on Tuesday it is bringing a set of new features to Android TVs to improve the experience of users who rely on mobile hotspots to connect their giant devices to the internet. The features, developed by Google’s Next Billion Users team, will roll out first to users in India and then in other countries.

This prosthetic arm combines manual control with machine learning

Prosthetic limbs are getting better every year, but the strength and precision they gain doesn’t always translate to easier or more effective use, as amputees have only a basic level of control over them. One promising avenue being investigated by Swiss researchers is having an AI take over where manual control leaves off.

To visualize the problem, imagine a person with their arm amputated above the elbow controlling a smart prosthetic limb. With sensors placed on their remaining muscles to pick up activation signals, they may fairly easily be able to lift their arm and direct it to a position where they can grab an object on a table.

But what happens next? The many muscles and tendons that would have controlled the fingers are gone, and with them the ability to sense exactly how the user wants to flex or extend their artificial digits. If all the user can do is signal a generic “grip” or “release,” that loses a huge amount of what a hand is actually good for.

Here’s where researchers from École polytechnique fédérale de Lausanne (EPFL) take over. Being limited to telling the hand to grip or release isn’t a problem if the hand knows what to do next — sort of like how our natural hands “automatically” find the best grip for an object without our needing to think about it. Robotics researchers have been working on automatic detection of grip methods for a long time, and it’s a perfect match for this situation.

Prosthesis users train a machine learning model by having it observe their muscle signals while attempting various motions and grips as best they can without the actual hand to do it with. With that basic information the robotic hand knows what type of grasp it should be attempting, and by monitoring and maximizing the area of contact with the target object, the hand improvises the best grip for it in real time. It also provides drop resistance, being able to adjust its grip in less than half a second should it start to slip.
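
To make the idea concrete, here is a minimal Python sketch of how such a shared-control loop might be structured. This is not the EPFL team’s code: the grasp labels, the eight-channel EMG features, the contact-area threshold and the flex increments are all illustrative assumptions.

```python
# A hedged sketch of "shared control": the user's EMG signals pick the grasp
# type, and the hand itself handles the fine details of gripping.
import numpy as np
from sklearn.linear_model import LogisticRegression

GRIPS = ["power", "pinch", "tripod"]  # hypothetical grasp classes

# --- Training phase: the user attempts each grip while EMG windows are
# recorded; here we stand in random data for real recordings. ---
rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 8))          # 300 windows x 8 EMG channels
y_train = rng.integers(0, len(GRIPS), 300)   # label = grip the user attempted
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def select_grip(emg_window: np.ndarray) -> str:
    """Decode the intended grasp type from one EMG feature window."""
    return GRIPS[int(clf.predict(emg_window.reshape(1, -1))[0])]

def control_loop(emg_window, contact_area, slipping, flex=0.5):
    """Once the user signals 'grip', the hand takes over the details:
    it keeps flexing while contact area can still grow, and tightens
    immediately if the object starts to slip."""
    grip = select_grip(emg_window)
    if slipping:                  # must complete well under 0.5 s in practice
        flex = min(1.0, flex + 0.2)
    elif contact_area < 0.8:      # keep maximizing contact with the object
        flex = min(1.0, flex + 0.1)
    return {"grasp": grip, "flex": flex}

print(control_loop(rng.normal(size=8), contact_area=0.6, slipping=False))
```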

The result is that the object is grasped strongly but gently for as long as the user continues gripping it with, essentially, their will. When they’re done with the object, having taken a sip of coffee or moved a piece of fruit from a bowl to a plate, they “release” the object and the system senses this change in their muscles’ signals and does the same.

It’s reminiscent of another approach, by students in Microsoft’s Imagine Cup, in which the arm is equipped with a camera in the palm that gives it feedback on the object and how it ought to grip it.

It’s all still very experimental, and done with a third-party robotic arm and not particularly optimized software. But this “shared control” technique is promising and could very well be foundational to the next generation of smart prostheses. The team’s paper is published in the journal Nature Machine Intelligence.

UK police arrest a number of climate activists planning Heathrow drone protest

U.K. police have arrested a number of environmental activists affiliated with a group that announced last month it would use drones to try to ground flights at the country’s busiest airport.

The group, which calls itself Heathrow Pause, is protesting against the government decision to green-light a third runway at the airport.

In a press release published today about an operation at Heathrow Airport, London’s Met Police said it has arrested nine people since yesterday in relation to the planned drone protest, which had been due to commence early this morning.

Heathrow Pause suggested it had up to 200 people willing to volunteer to fly toy drones a few feet off the ground within a 5km drone “no fly” zone around the airport — an act that would technically be in breach of U.K. laws on drone flights, although the group said it would only use small drones, flown at head height and not within flight paths. It also clearly communicated its intentions to the police and airport well in advance of the protest.

“Three women and six men aged between their 20s and their 60s have been arrested on suspicion of conspiracy to commit a public nuisance,” the Met Police said today.

“Four of the men and the three women were arrested yesterday, Thursday, 12 September, in Bethnal Green, Haringey and Wandsworth, in response to proposed plans for illegal drone use near Heathrow Airport.

“They were taken into custody at a London police station.”

The statement says a further two men were arrested this morning within the perimeter of Heathrow Airport on suspicion of conspiracy to commit a public nuisance — though it’s not clear whether they are affiliated with Heathrow Pause.

Videos of confirmed members of the group being arrested by police prior to the planned Heathrow Pause action have been circulating on social media.

In an update on its Twitter feed this morning, Heathrow Pause says there have been 10 arrests so far.

It also claims to have made one successful flight, and says two earlier drone flight attempts were thwarted by signal jamming technology.

More flights are planned today, it adds.

A spokeswoman for Heathrow told us there has been no disruption to flights so far today.

In a statement the airport said: “Heathrow’s runways and taxiways remain open and fully operational despite attempts to disrupt the airport through the illegal use of drones in protest nearby. We will continue to work with the authorities to carry out dynamic risk assessment programmes and keep our passengers flying safely on their journeys today.”

“We agree with the need for climate change action but illegal protest activity designed with the intention of disrupting thousands of people is not the answer. The answer to climate change is in constructive engagement and working together to address the issue, something that Heathrow remains strongly committed to doing,” it added.

We’ve asked the airport to confirm whether signal jamming counter-drone technology is being used to try to prevent the protest.

The Met Police said a dispersal order under Section 34 of the Anti-social Behaviour, Crime and Policing Act 2014 has been implemented in the area surrounding Heathrow Airport today.

“It will be in place for approximately 48 hours, commencing at 04:30hrs on Friday, 13 September,” it writes. “The order has been implemented to prevent criminal activity which poses a significant safety and security risk to the airport.”

iOS 13 will be available on September 19

Apple announced in a press release that iOS 13 will be available on September 19. Even if you don’t plan to buy a new iPhone, you’ll be able to get a bunch of new features.

But that’s not all: iOS 13.1 will be available on September 30. Apple had to remove some iOS 13.0 features at the last minute, such as Shortcuts automations and the ability to share your ETA in Apple Maps, because they weren’t stable enough. That’s why iOS 13.1 will be released shortly after iOS 13.

As always, iOS 13 will be available as a free download. If you have an iPhone 6s or later, an iPhone SE or a 7th-generation iPod touch, your device supports iOS 13.

In addition, watchOS 6 will be released on September 19. iPadOS 13, unfortunately, won’t arrive until September 30. And it looks like tvOS 13 will also be released on September 30, according to a separate press release.

Here’s a quick rundown of what’s new in iOS 13. This year, in addition to dark mode, it feels like every single app has been improved with some quality-of-life updates. The Photos app features a brand new gallery view with autoplaying live photos and videos, smart curation and a more immersive design.

This version has a big emphasis on privacy, as well, thanks to a new signup option called “Sign in with Apple” and a bunch of privacy popups for Bluetooth and Wi-Fi consent. Apple Maps now features an impressive Google Street View-like feature called Look Around. It’s only available in a handful of cities, but I recommend… looking around, as everything is in 3D.

Many apps have been updated, such as Reminders with a brand new version, Messages with the ability to set a profile picture shared with your contacts, Mail with better text formatting options, Health with menstrual cycle tracking, Files with desktop-like features, Safari with a new website settings menu, etc. Read more on iOS 13 in my separate preview.

On the iPad front, for the first time Apple is giving iOS for the iPad its own name: iPadOS. Multitasking has been improved, the Apple Pencil should feel snappier, Safari is now as powerful as Safari on macOS and more.

Why does the new iPhone 11 Pro have 3 cameras?

On the back of the iPhone 11 Pro can be found three cameras. Why? Because the more light you collect, the better your picture can be. And we pretty much reached the limit of what one camera can do a little while back. Two, three, even a dozen cameras can be put to work creating a single photo — the only limitation is the code that makes them work.

Earlier in today’s announcements, Apple showed the base-level iPhone 11 with two cameras, having ditched the telephoto for an ultra-wide lens. But the iPhone 11 Pro has the original wide plus the ultra-wide and telephoto, its optical options covering approximate 35mm equivalents of 13mm, 26mm and 52mm.

“With these three cameras you have incredible creative control,” said Apple’s Phil Schiller during the stage presentation. “It is so pro, you’re going to love using it.”

Previously the telephoto lens worked with the wide-angle camera to produce portrait mode effects or take over when the user zooms in a lot. By combining the info from both those cameras, which have a slightly different perspective, the device can determine depth data, allowing it to blur the background past a certain point, among other things.
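
For a sense of the math involved, here is a toy Python illustration of the pinhole-stereo relation that makes depth recoverable from two slightly offset views. The baseline, focal length and disparity numbers are made up for illustration; Apple’s actual pipeline is far more involved and not public.

```python
# Toy illustration of depth from two offset cameras, the principle behind
# portrait-mode blur. All numbers are illustrative assumptions.
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Classic pinhole stereo relation: depth = f * B / d.
    Nearer points shift more between the two views (larger disparity)."""
    return focal_px * baseline_m / disparity_px

# A feature that shifts 20 px between the two views, assuming a ~1 cm lens
# separation and a 2800 px focal length:
z = depth_from_disparity(focal_px=2800, baseline_m=0.01, disparity_px=20)
print(f"estimated depth: {z:.2f} m")  # ~1.40 m

# Pixels whose estimated depth is beyond the subject's can then be blurred.
```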

The ultra-wide lens provides even more information, which should improve the accuracy of portrait mode and other features. One nice thing about a wide angle on a dedicated sensor and camera system is the creators can build in lots of corrections so you don’t get crazy distortion at the corners or center. Fundamentally you’ll still want to back off a bit, because using an ultra-wide lens on a face gives it a weird look.

While we’re all used to the pinch-to-zoom-in gesture, what you’re usually doing when you do that is a digital zoom, just looking closer at the pixels you already have. With an optical zoom, however, you’re switching between different pieces of glass and, in this case, different sensors, getting you closer to the action without degrading the image.

One nice thing about these three lenses is that they’ve been carefully chosen to work together well. You may have noticed that the ultra-wide is 13mm, the wide is twice that at 26mm and the telephoto is twice that at 52mm.

The simple 2x factor makes it easy for users to understand, sure, but it also makes the image-processing math of switching between these lenses easier. And as Schiller mentioned onstage, “we actually pair the three cameras right at the factory, calibrating for focus and color.”
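
For a concrete feel of why the doubling simplifies things, here is a small Python sketch of how a camera app might map a requested zoom level onto one of the three lenses plus a digital crop. The selection logic is a plausible reconstruction for illustration, not Apple’s actual switching code.

```python
# With a clean 2x spacing, any requested zoom maps onto one lens plus at
# most a 2x digital crop. Focal lengths are the 35mm equivalents from above.
LENSES = {"ultra-wide": 13, "wide": 26, "telephoto": 52}

def pick_lens(requested_zoom: float, base_mm: int = 26):
    """Choose the longest lens that still covers the requested framing,
    then make up the remainder with a digital crop (1.0 = purely optical).
    Zoom is expressed relative to the 26mm wide camera."""
    target_mm = requested_zoom * base_mm
    best = max((name for name, mm in LENSES.items() if mm <= target_mm),
               key=lambda name: LENSES[name])
    return best, target_mm / LENSES[best]

for z in (0.5, 1.0, 1.3, 2.0, 3.0):
    lens, crop = pick_lens(z)
    print(f"{z}x -> {lens} lens, {crop:.2f}x digital crop")
```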

Not only that, but when you’re shooting with the wide camera, it’s sharing information with the other two cameras, so when you switch to them, they’re already focused on the same point, shooting at the same speed and exposure, white balance and so on. That makes switching between them mostly seamless, even while shooting video (just be aware that you will shake the device when you tap it).

Apple’s improvements to the iPhone camera system this year are nowhere near as crazy as the switch from one to two cameras made by much of the industry a couple years back. But a wide, tele and ultra-wide setup is a common one for photographers, and no doubt will prove a useful one for everyone who buys into this rather expensive single-device solution.

Sonos took the mic out of its smart speaker for the $179 Sonos One SL

Sonos has a new entry-level connected speaker that will give you all of its multi-room, high-quality sound, without the onboard microphones and smart assistants of the Sonos One. The microphone-free Sonos One SL retails for $179 ($20 less than the existing Sonos One) and comes with AirPlay 2, delivering good functional upgrades over the Play:1 it replaces.

Visually, you’d be hard-pressed to tell the Sonos One from the Sonos One SL, especially at a distance. It has the exact same dimensions, and the same industrial design, featuring a matte black or white finish and controls on the top. Those controls are the one place you’ll notice an obvious difference, however — the Sonos One has an additional LED, microphone icon and capacitive touch surface above the playback controls for turning on and off the built-in smart assistant and microphone. The Sonos One SL, lacking a mic, has none of these.

Unlike the Play:1, Sonos One SL can stereo pair with a Sonos One, which is a nice feature, because when using two of these in tandem in one room you actually only need one to have a mic for use with Alexa or Google Assistant. Two Sonos One SL speakers will also pair with one another, of course, and with combined savings of $40 versus the Sonos One, these are naturally great candidates for use with the Sonos Beam for a home theater surround setup.

Of course, you can still use the Sonos One SL in combination with a smart assistant, just like any other Sonos speaker: you can direct music to it by voice using any other Alexa or Google Assistant-enabled device.

The $179 Sonos One SL is now the least expensive offering in Sonos’ own lineup — but the $149 Sonos x Ikea bookshelf speaker is the lowest-price Sonos-compatible offering overall. They’re a lot closer than you might think in terms of quality and other factors that would contribute to a buying decision, but the Sonos One SL probably has a slight edge in sound, while the Ikea bookshelf speaker is a bit more versatile in terms of mounting and installation options.

Sonos One SL is up for pre-order now, and will be shipping as of September 12.

Apple could add in-screen fingerprint reader to 2020 iPhone

According to a new report from Bloomberg, Apple has been working on in-screen fingerprint readers. But that feature won’t be ready for the new iPhone that will be announced next week. It could be released in 2020, or maybe 2021 if Apple’s suppliers can’t meet deadlines.

If you’ve played with the most recent smartphones from Samsung, Huawei and other Android manufacturers, you know that in-screen fingerprint readers already work quite well. When you unlock your phone, you can see a fingerprint icon on the screen. It then works just like any fingerprint reader — you put your finger on the icon and it unlocks your phone.

In 2017, Apple introduced Face ID with the iPhone X as a replacement for Touch ID, its fingerprint technology. But it sounds like the company now wants to give users multiple options by re-adding Touch ID to its smartphones.

All 2018 iPhone models, as well as the most recent iPad Pro models, now work with Face ID. But you can still buy some Touch ID devices, such as the iPad Air or the MacBook Pro; their fingerprint readers are integrated into a separate button.

Bloomberg also confirms a Nikkei report that Apple could launch a new low-cost iPhone SE.

Despite the name, it would be based on the iPhone 8 design instead of the previous iPhone SE design. It would feature the same 4.7-inch display that you can find on the iPhone 6, iPhone 6S, iPhone 7 and iPhone 8.