Google’s Glass wearable could soon be able to recognize faces of those around the wearer, thanks to a dedicated service for human and object recognition that could be built into third-party apps. The handiwork of Lambda Labs, the special Glass facial recognition API will integrate into software and services using Google’s Mirror API for Glass, crunching shots from the camera and spitting out the identity of people and objects it recognizes. Lambda Labs expects the system to be used for real-world social networking and person-location services, though also warns that it could eventually fall foul of impending privacy regulation.
Lambda’s service has been in operation – though not in Glass-specific form – for some time, and is already used by around 1,000 developers, according to the company. It works by using a pre-existing “album” of known faces or objects, for instance your work colleagues, against which new captures from the camera are compared.
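Lambda Labs hasn't published the exact endpoints its Glass-facing API will expose, but the album-and-compare approach it describes is easy to illustrate. The sketch below uses the open-source face_recognition Python library rather than Lambda's own service, and the album file paths and names are made up for the example.

```python
# Illustration of the "album of known faces" approach, using the open-source
# face_recognition library (not Lambda Labs' actual API).
import face_recognition

# Build the album: one encoding per known colleague (paths are hypothetical).
album = {
    "Alice": face_recognition.face_encodings(
        face_recognition.load_image_file("album/alice.jpg"))[0],
    "Bob": face_recognition.face_encodings(
        face_recognition.load_image_file("album/bob.jpg"))[0],
}

def identify(capture_path, tolerance=0.6):
    """Compare a new camera capture against the album and return any matches."""
    capture = face_recognition.load_image_file(capture_path)
    matches = []
    for unknown in face_recognition.face_encodings(capture):
        distances = face_recognition.face_distance(list(album.values()), unknown)
        for name, distance in zip(album.keys(), distances):
            if distance <= tolerance:
                matches.append((name, distance))
    return matches

print(identify("captures/glass_photo.jpg"))
```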
What the system can’t do, right now at least, is compare those around you to images not in its own album. So, you couldn’t walk into a room and have Glass flag up those you might be friends with on Google+ based on the publicly uploaded photos they’ve shared. It’s also not a real-time process: images have to be passed over to Lambda’s engine via the Mirror API, and the results then fed back in the opposite direction.
That round trip involves a delay of a few seconds, the company told TechCrunch. It’s a similar system to the one we saw used by MedRef for Glass, an app intended to make calling up patient records more straightforward for doctors and hospital staff, and indeed Lambda Labs’ API could be integrated server-side into future versions of MedRef or apps like it.
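The round trip is easier to picture in code. The sketch below is a hypothetical Mirror API webhook handler: Glass shares a photo to the service, the image is downloaded from the timeline attachment, handed to a recognition backend (a placeholder here, since Lambda Labs' endpoint isn't documented in the article), and the result is pushed back to the wearer as a new timeline card. The recognize() helper is an assumption for illustration.

```python
# Hypothetical sketch of the Mirror API round trip: a photo shared from Glass
# arrives as a timeline notification, the image goes out to a recognition
# backend, and the result comes back as a new timeline card.
# "service" is assumed to be an authorized googleapiclient Mirror API client,
# e.g. service = build("mirror", "v1", credentials=creds).
import requests


def recognize(image_bytes):
    """Placeholder for the server-side recognition call; returns matched names."""
    return []


def handle_notification(service, notification, access_token):
    """Process a Mirror API notification for a photo shared to the service."""
    item = service.timeline().get(id=notification["itemId"]).execute()

    for attachment in item.get("attachments", []):
        # Download the captured image via the attachment's contentUrl.
        image = requests.get(
            attachment["contentUrl"],
            headers={"Authorization": "Bearer " + access_token},
        ).content

        # The recognition step is where the few-second delay comes in.
        names = recognize(image)

        # Feed the identification back to the wearer as a new timeline card.
        text = ("Recognized: " + ", ".join(names)) if names else "No match found"
        service.timeline().insert(body={"text": text}).execute()
```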
Even with functionality like this, Glass wearers won’t be able to roam the streets with the names and personal details of those around them hovering in the air like icons in The Sims, but the facial identification system still leads Google’s headset into even murkier privacy territory. Earlier this month, a concerned US Congressional committee fired off a list of privacy-related questions to Google CEO Larry Page, demanding reassurance by June 14 that the wearable wouldn’t collect personal data without the consent of non-users and wouldn’t be unduly intrusive in ways smartphones currently are not, and asking how it might be updated and its functionality extended in future.
Currently, Glass lacks native face-recognition, hence the opening for third-party services like Lambda Labs’ to step in. Google’s own stance has been that it would require “strong privacy protections” be in place before it would consider adding the functionality itself; exactly what protections would be considered sufficiently “safe” for the public is unclear.
Members of Google’s Glass team touched on the potential for privacy infringement during the fireside chat about the wearable at Google I/O earlier this month. Among the factors built in to discourage misuse of the camera is an SDK-level requirement that the display be active whenever the headset is recording, Glass engineer Charles Mendis revealed; there’s also, product director Steve Lee pointed out, “a clear social gesture” involved in triggering that recording, whether it’s physically pressing the button on the upper side of the eyepiece or giving the “OK Glass, take a photo” spoken command.
Nonetheless, it’s a young segment of the industry and the rules are likely to be fluid as the “what we could do” urge for progress bumps up against “what we should do” restraint. Parallel developments in Google+ are leading Glass down the life-logging path, giving room – and the organizational tools – to store every moment that goes on around you, even if the hardware and software aren’t quite set up that way today.