Most of us are familiar with the ability of the likes of Facebook to find our faces and recognise friends in embarrassing photos taken ten years ago.
Typically the technology works by mapping a face’s geometry: the relative positions of, and distances between, the eyes, nose, brow, mouth and chin. Up to 70 ‘facial landmarks’ can be used to give a face its ‘facial signature’ and distinguish it from others.
This signature can be used to find other faces in a database with very similar signatures, and so identify your face in long-forgotten pics or video footage.
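The idea of a landmark-based signature can be sketched in a few lines. The landmark coordinates and the choice of pairwise distances below are illustrative assumptions, not the method any particular vendor uses:

```python
from itertools import combinations
from math import dist

def signature(landmarks):
    """Build a scale-invariant 'facial signature' from landmark points:
    all pairwise distances, normalised by the longest one. Real systems
    use up to 70 detected landmarks; four are shown here for brevity."""
    pairs = [dist(a, b) for a, b in combinations(landmarks, 2)]
    longest = max(pairs)
    return [d / longest for d in pairs]

def signature_gap(sig_a, sig_b):
    """Euclidean distance between signatures: smaller means more alike."""
    return sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)) ** 0.5

# Hypothetical landmarks (x, y): eyes, nose, mouth.
face_a = [(30, 40), (70, 40), (50, 60), (50, 80)]
face_b = [(60, 80), (140, 80), (100, 120), (100, 160)]  # same face, twice the size

print(signature_gap(signature(face_a), signature(face_b)))  # ~0.0
```

Because the distances are normalised, the same face photographed at a different size or distance still yields a near-identical signature, which is what makes matching across unrelated photos possible.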
In recent years, thanks in part to more readily available facial recognition capabilities offered by major cloud providers, the same technique is being applied to identify people not just in photos from our college days, but from as far back as the 1860s.
Last year, developer Vignesh Sankaran built a tool which recognised similar faces in the State Library of New South Wales’ digitised image collection.
The web application used Amazon Web Services’ Rekognition facial detection and recognition capabilities to pick out faces in photographs from the library’s Sam Hood collection.
Hood worked as a photographer and photojournalist predominantly in the Sydney area from the 1880s to the 1950s. A collection of more than 30,000 of Hood’s negatives was acquired by the library in the 1970s.
“Clicking on an image shows the results of the facial detection with bounding boxes around the detected faces. Bounding boxes coloured in dark blue are faces that have had similar faces detected in the sample image collection, with a 95 per cent degree of confidence,” Sankaran described.
Clicking on a blue box brings up any other photos in which that face appears.
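A minimal sketch of the 95 per cent threshold Sankaran describes, applied to a Rekognition `DetectFaces`-style response. The sample response here is hand-written for illustration; a real one would come from `boto3.client('rekognition').detect_faces(...)` against an image in S3:

```python
# Illustrative excerpt of a Rekognition DetectFaces response: bounding
# boxes are expressed as ratios of the image's width and height.
sample_response = {
    "FaceDetails": [
        {"BoundingBox": {"Left": 0.12, "Top": 0.08, "Width": 0.10, "Height": 0.15},
         "Confidence": 99.7},
        {"BoundingBox": {"Left": 0.55, "Top": 0.20, "Width": 0.08, "Height": 0.12},
         "Confidence": 88.2},  # below the 95% threshold -- discarded
    ]
}

def confident_boxes(response, image_w, image_h, threshold=95.0):
    """Keep faces at or above the confidence threshold and convert the
    ratio-based bounding boxes to pixel coordinates for drawing."""
    boxes = []
    for face in response["FaceDetails"]:
        if face["Confidence"] >= threshold:
            bb = face["BoundingBox"]
            boxes.append((round(bb["Left"] * image_w), round(bb["Top"] * image_h),
                          round(bb["Width"] * image_w), round(bb["Height"] * image_h)))
    return boxes

print(confident_boxes(sample_response, 800, 600))  # [(96, 48, 80, 90)]
```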
“The results of the facial analysis were stored as JSON files in S3, alongside the images themselves. API endpoints for the front end were built with the Serverless framework in Node JS, and were hosted on AWS Lambda. Serverless handled the deployment and configuration details, and was quite easy to use. The front end was built with React JS, which I found to be a complementary technology to Node JS,” he said.
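The storage layout Sankaran describes — analysis results as JSON objects sitting next to the images in S3 — might look something like the following. Bucket and key names are invented for illustration, and the upload call is shown as a comment since it needs AWS credentials:

```python
import json

def analysis_key(image_key):
    """Derive the key for the JSON analysis object stored alongside the
    image, e.g. 'a/b.jpg' -> 'a/b.faces.json'."""
    stem, _, _ = image_key.rpartition(".")
    return (stem or image_key) + ".faces.json"

# Hypothetical output of the face-detection step.
faces = [{"box": [96, 48, 80, 90], "confidence": 99.7}]
image_key = "sam-hood/negative-0001.jpg"

body = json.dumps({"image": image_key, "faces": faces})
print(analysis_key(image_key))  # sam-hood/negative-0001.faces.json

# Uploading alongside the image would then be (not run here):
# boto3.client("s3").put_object(Bucket="library-archive",
#                               Key=analysis_key(image_key), Body=body)
```

Keeping the analysis next to the image means a Lambda-backed API endpoint can serve both with a single key convention, with no separate database to manage.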
Potentially, the application could be further developed to attach names to any recognised faces, making searching for individuals in the collection far easier for library staff.
Similar work is underway on a larger scale in the US to match faces found in crowd-sourced and archived American Civil War photos.
In 2017, a collaboration between researchers at Virginia Tech, the Virginia Center for Civil War Studies and Military Images magazine resulted in the development of CivilWarPhotoSleuth (CWPS).
The tool uses facial recognition software to identify 27 ‘facial landmarks’ in photographs from the era uploaded by the public. CWPS then compares the unique facial reference points against the tens of thousands of photos in its archive.
“Face recognition allows us to find matches even when the soldier’s facial hair changes, or if a different view of him is in our archive,” the tool’s makers said.
“One of the greatest strengths of the site is that the more people use it, the more valuable it becomes. When you add an identified photo from your collection, it may instantly match a mystery photo that another user has been trying to identify for years. Likewise, if you search an unidentified photo and don’t find a match at first, you will be automatically notified if a potential matching photo appears on the site at any point in the future,” they added.
A public version of the site was launched in August.
The capability also has enterprise applications – particularly for media organisations wanting to find relevant footage or stills in their video archives.
“They have millions of hours of video content and it’s typically stored in multiple legacy systems, there is no or varying meta-tagging, and the search processes for finding content are extremely old and they’re manual and they cut across multiple systems,” explains Angus Dorney, co-CEO of Sydney and Melbourne-based cloud technology firm Kablamo.
“If you’re a newsmaker in a media organisation or work for a government archive and somebody asks you for a specific piece of footage it’s very difficult and time consuming and expensive to try and find,” he adds.
Kablamo builds solutions that have a “YouTube-like user experience” to find relevant archive footage. Using AWS face and object recognition tools, users simply type in a person or thing “and get a list back of prioritised rankings, where it is, and be able to click and access that example right away,” Dorney – a former Rackspace general manager – says.
Over time, the machine learning models behind the capability can refine and adjust their behaviour, making results more accurate and more useful to users.
“You really have a computer starting to function like a human brain around these things which is incredibly exciting,” Dorney adds.
Similar work is being undertaken by Danish firm Vintage Cloud. It uses a visual recognition API offered by Clarifai to apply meta-tagging to old film stock in a product called Smart Indexing. The company recently announced a database of 100,000 faces that customers can access and match to those found in archive footage.
“Imagine if a producer came to you, needing footage of Marlon Brando, a fire in a skyscraper or a 1976 Ford Pinto,” said Peter Englesson, CEO of Vintage Cloud. “Smart Indexing your archive assets would allow you not only to quickly establish whether you had the desired clip but also to access it immediately – providing the opportunity to realise the value of that asset.”