
How technology will tell you who’s who at the Royal Wedding

Celebrity spotting at this weekend’s Royal Wedding will be an automated activity for millions of viewers in the UK, as European broadcaster Sky will use machine learning technology from Amazon Web Services (AWS) to name famous guests as they enter the chapel. 

Prince Harry and Meghan Markle are due to marry today.

The guests will automatically be labelled with a name tag when they reach St George’s Chapel at Windsor Castle, and added to a list that includes a bite-sized biography and details of their connection to the Royal couple.

Viewers can then watch the service on-demand and navigate through the footage to the arrival of specific guests.

Sky claims that this is the world’s first live machine learning project at a large-scale event, but it might not have been possible had the wedding taken place just one week later, as GDPR comes into force on Friday 25 May.

Sky has not received the guest list, so the broadcaster researched the likely invitees and used their images as the training data set for the system to recognise their faces. Sky will delete this biometric data after completing the image recognition run, but they would not have been able to collect it under GDPR.
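The broadcaster has not detailed the mechanics of this step, but as a rough illustration, a reference set like this could be loaded into Amazon Rekognition, the AWS service Sky is using for recognition, by creating a face collection, indexing one photo per expected guest, and deleting the collection once the event is over. The collection name and file paths in the sketch below are assumptions, not details of Sky’s actual setup.

import boto3

# Illustrative sketch only: Sky has not published its pipeline. This shows how a
# researched set of guest photos could be indexed into an Amazon Rekognition
# face collection, and how that biometric data could be deleted afterwards.
rekognition = boto3.client("rekognition", region_name="eu-west-1")

COLLECTION_ID = "royal-wedding-guests"  # assumed name, not Sky's

# One-off setup: create the collection that will hold the face vectors.
rekognition.create_collection(CollectionId=COLLECTION_ID)

# Index one reference photo per expected guest. ExternalImageId carries the
# guest's name so a later match can be tied back to the editorial biography.
guests = {
    "Elton John": "training_images/elton_john.jpg",            # assumed paths
    "Victoria Beckham": "training_images/victoria_beckham.jpg",
}
for name, path in guests.items():
    with open(path, "rb") as image_file:
        rekognition.index_faces(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": image_file.read()},
            ExternalImageId=name.replace(" ", "_"),
            MaxFaces=1,
        )

# After the image recognition run, the face vectors (the biometric data) are
# removed, which is what Sky says it plans to do.
# rekognition.delete_collection(CollectionId=COLLECTION_ID)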

“Under GDPR, you need to obtain explicit consent from an individual before you do this,” Hugh Westbrook, senior product owner at Sky, tells Computerworld UK.

“We’re interested in this as a storytelling mechanism, and it might be that we need to take a different approach with certain projects in the future.”

How the image recognition service works

When Elton John, Victoria Beckham, and the other celebrity guests arrive at the wedding, a camera will capture their faces and send the footage to a broadcast van lurking outside.

A nearby AWS Elemental Live encoder compresses the feed and sends it to the cloud-based AWS Elemental Media Services for processing of the live and on-demand multiscreen content.

“Having the compression close to the machine learning service allows you to then make the different renditions, tie it to the metadata, and send it downstream,” says Keith Wymbs, chief marketing officer at AWS Elemental.

“This type of innovation is difficult to imagine if you’re putting together a bunch of physical wires and boxes in a traditional production environment,” he adds, arguing that cloud technology is what opens the door to this sort of project.
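Neither Sky nor AWS has said exactly which of the Elemental Media Services sit behind the workflow, but the ingest step described above could, for example, be stood up with AWS Elemental MediaLive: the on-site encoder pushes its compressed contribution feed into a cloud input, from which a channel produces the multiscreen renditions. The names, address range and stream key below are illustrative assumptions.

import boto3

# Hypothetical sketch of the cloud side of the ingest described above. AWS
# Elemental MediaLive is assumed here; the article does not name the specific
# Media Services used, and all identifiers below are made up for illustration.
medialive = boto3.client("medialive", region_name="eu-west-1")

# Allow the broadcast van's encoder to push into the cloud input.
security_group = medialive.create_input_security_group(
    WhitelistRules=[{"Cidr": "203.0.113.0/24"}]  # example outside-broadcast range
)

# Register an RTMP push input for the AWS Elemental Live encoder to stream into;
# a MediaLive channel attached to this input would then create the renditions
# that are tied to the recognition metadata and sent downstream.
medialive.create_input(
    Name="royal-wedding-arrivals",
    Type="RTMP_PUSH",
    InputSecurityGroups=[security_group["SecurityGroup"]["Id"]],
    Destinations=[{"StreamName": "wedding/arrivals"}],
)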

The celebrities are identified in real time and tagged with biographical information through the GrayMeta data analysis platform and the Amazon Rekognition video and image analysis service.
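The integration details have not been published and the GrayMeta layer is proprietary, but the Rekognition side of an identification step like this can be sketched roughly as follows: frames grabbed from the arrivals feed are matched against the previously indexed guest collection, and only confident matches are passed on for editorial review. (For a fully streaming path, Rekognition also offers stream processors over Kinesis Video Streams; the per-frame approach, collection name and threshold here are simplifying assumptions.)

import boto3

# Hypothetical sketch of the identification step. Amazon Rekognition is named in
# the article; the per-frame matching, collection name and threshold below are
# assumptions, and the GrayMeta platform's role is not shown.
rekognition = boto3.client("rekognition", region_name="eu-west-1")

def identify_guest(frame_jpeg_bytes, collection_id="royal-wedding-guests"):
    """Match a face in a captured video frame against the indexed guest photos."""
    response = rekognition.search_faces_by_image(
        CollectionId=collection_id,
        Image={"Bytes": frame_jpeg_bytes},
        FaceMatchThreshold=90,  # only surface confident matches for editorial review
        MaxFaces=1,
    )
    matches = response.get("FaceMatches", [])
    if not matches:
        return None
    best = matches[0]
    # ExternalImageId was set to the guest's name when the collection was built.
    return best["Face"]["ExternalImageId"], best["Similarity"]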

Sky’s editorial team quickly assesses the results before they reach the viewers through the “Royal Wedding: Who’s Who Live” service, which will be available through the Sky News app or website.

The service takes viewers to a dedicated feed of arrivals, while Kay Burley presents from a studio in Windsor.

Why AWS and GrayMeta?

Sky spoke to a number of vendors about the project before plumping for GrayMeta, citing the company’s track record in data analytics, its adaptability, and a partner ecosystem that includes AWS.

AWS has implemented a similar service in a video on demand (VOD) environment, tagging members of the US Congress as they stood up to make legislative speeches on C-SPAN, which televises proceedings of the US federal government, but this is the first time the technology has been deployed on a live broadcast.

The entire workflow of the service is deployed in the cloud, which makes it easier to scale and provision resources for unpredictable audience sizes.

“The experience I’ve had with AWS is that it’s resilient,” says Westbrook. “We obviously have fluctuating audience sizes, and I’ve worked on a lot of high profile things at Sky, like the general election, where the audience can rocket.

“You always want to be sure that when you’re working on that kind of a service you know that you’ve got lots of resilience and you can scale quickly if the audience spikes.

“I think the fact that there’s obviously lots of centres around the world and there’s lots of redundancy in-built and scalability, that just gives us the confidence around this solution.”

The service is now in the final testing phase. Westbrook thinks it could be the first of many Sky programmes enhanced by machine learning.

“We think it’s a really interesting method of storytelling,” he says. “We certainly want to explore what machine learning and automatic recognition of objects within video can do for us, because that’s a very interesting opportunity for us to analyse live data and tell people different things. I think we’ve definitely opened a theme where we can go with this.”

Wymbs believes that sports, news, and entertainment coverage could all benefit from the addition of machine learning-enabled video workflows.

“Imagine a cycling race,” he says. “It takes a long time to watch, and not every viewer is going to be satisfied with a five-hour long, linear presentation of the race. What if users could also access certain aspects of the event that are most interesting to them?

“That may be big climbs, or crashes, but it could also be things like the moment their favorite racer caught a breakaway, or fell off the back of the peloton. Add social sharing to the mix and there’s yet another layer of user interaction these kinds of technologies can enable.”
