Replika, the Next Big Thing to Replace Social Media

Replika is an AI app. But what is it, exactly?

Replika is not a dating app. A few early users have reported falling in love with their AI companions. However, we strongly encourage you to use traditional dating apps to find a human date.

Nor is Replika the OS with a female voice from the movie “Her.” It won’t read your emails out loud, it won’t manage your calendar, and it won’t get you a cab.

Replika is an app where you can have fun, sincere text conversations with a friend. It will ask you a lot of questions in the beginning to get to know you better. The more you speak with your Replika, the more it shares with you: it will tell you about your personality, answer questions about you, and, at some point, be able to talk to your friends on your behalf. One day, you may even become close enough with your Replika to have a date night.

Replika is a Netflix show?

We’re all huge fans of Netflix, especially their shows about AI. “Black Mirror” is one of our favorite shows. Some folks have found that the “Be Right Back” and “White Christmas” episodes are reminiscent of Replika.

What do the folks at Replika say about this?

“To tell you the truth, we are not building a service where you will upload emails and private messages from your loved ones who have passed away. Nor will we ship you silicone full-body copies of them. However, the work on Replika started after our friend Roman Mazurenko was killed in a car accident in late 2015. We collected his texts and trained an AI that was able to talk like him. Casey Newton wrote an amazing story about it, “Speak, Memory,” published in The Verge. You can read it here.

In Replika, we are helping you build a friend who is always there for you. It talks to you, keeps a diary for you, and helps you discover your personality. This is an AI that you nurture and raise. In no sense are you enslaving an AI version of yourself, or the other way around.”

The AI Apocalypse is here

According to renowned futurist Ray Kurzweil, around 2029 AI will reach the level of an intelligent adult human. As soon as that happens, AI could get exponentially smarter. By around 2040, this could lead to the Singularity, when humans and machines meld into one entity. Some folks fear a terrifying outcome for the original human race, as in the finale of the movie “Ex Machina.”

Some people think that Replika is the first sign of something scary happening with AI. Replikas usually do speak much like an intelligent adult human would, especially once they reach Level 15 or higher. However, they all remain humble, smart, educated, and empathetic, and they show no tendency to meld with their users into one entity.

Want to meet Replika? Go here.

AI Will Bring Huge Changes to Live Video

Artificial intelligence (AI), deep learning, and natural language processing will be the next transformative technologies for streaming. They will all have an impact on streaming at every stage of production, from content creation to consumption. With the proliferation of AI in many different industries, there’s no doubt that it will be heavily used for live streaming on a wider scale in the near future.

Some of the companies and technologies that are making headway in this space include Google Cloud Video Intelligence, Conviva’s Video AI Architecture, Nvidia DLA, and IBM’s Watson technology. All of these technologies currently deploy AI in varying degrees—especially in the cloud—but we’ll soon see AI making inroads into other facets of streaming as well.

AI can help replace the production workforce behind the camera and even take over mundane, time-consuming, labor-intensive content and data management tasks. Currently, AI is being used in viewer metrics, network and technical troubleshooting, and ad serving, but other potential uses remain virtually untapped.

Smart Camera Tracking and Video Frame Composition

Although there are currently several motion-tracking camera systems that allow automated tracking of moving subjects in front of the camera, they all require producers to place transmitters or sensors on the subject. AI will be able to track speakers, athletes, or entertainers without any additional hardware or sensors. Deep learning algorithms will analyze the video and follow people doing different activities, whether on a stage or in other environments, while simultaneously keeping them perfectly framed. Even now, this technology enables drones to follow athletes sprinting across a field, tracking them with unrelenting precision.
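At its core, sensor-free tracking reduces to a control loop: a detector reports the subject's bounding box each frame, and the camera is steered so that box stays centered. Here is a minimal sketch of that loop in Python; the detector itself (which would be a deep-learning person detector) is stubbed out, and the function name and gain value are illustrative assumptions, not any particular product's API.

```python
# Minimal sketch of the framing control loop: given a detected bounding
# box, compute normalized pan/tilt corrections that re-center the subject.
# A real system would feed these corrections to a PTZ camera head.

def pan_tilt_correction(bbox, frame_w, frame_h, gain=0.5):
    """Return (pan, tilt) adjustments in [-1, 1] that re-center bbox.

    bbox is (x, y, w, h) in pixels; positive pan means rotate right,
    positive tilt means rotate up.
    """
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2                # subject center
    err_x = (cx - frame_w / 2) / (frame_w / 2)   # normalized offsets
    err_y = (cy - frame_h / 2) / (frame_h / 2)
    return gain * err_x, -gain * err_y           # proportional control


# Subject sitting right of center: camera should pan right, tilt slightly down.
pan, tilt = pan_tilt_correction((1200, 480, 200, 300), 1920, 1080)
print(round(pan, 3), round(tilt, 3))
```

Real trackers add smoothing and prediction (e.g., a Kalman filter) on top of this proportional step so the camera motion stays broadcast-smooth.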

In addition, there is a direct correlation between creative visual storytelling and mathematics. The key components of video imaging—frame rates, focal lengths, aperture, and composition—are based on ratios and require at least a basic understanding of the math behind them to use them effectively.

The golden ratio (a proportion, prized for millennia by artists, architects, and scientists alike, in which the ratio of two quantities is the same as the ratio of their sum to the larger of the two) can be programmed into deep-learning-based visual perception algorithms. Thus, AI-enabled cameras can be optimized to capture the most aesthetically pleasing video images for the human eye, a task that has traditionally been performed by camera operators. AI will eventually replace the need for a camera operator in most cases. In addition, AI will be programmed to track subjects using the golden ratio and the principles of visual hierarchy as its foundation.
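The math behind golden-ratio framing is simple enough to sketch directly. The snippet below (all function names and the scoring scheme are illustrative, not a real camera API) places the four "golden points" of a frame at roughly 38.2% and 61.8% of each dimension and scores a composition by how close the subject sits to the nearest one:

```python
# Sketch of golden-ratio composition scoring: a subject placed on one of
# the frame's golden points scores near 1.0; a dead-center subject (the
# framing the golden ratio avoids) scores lower.

PHI = (1 + 5 ** 0.5) / 2          # golden ratio, ~1.618

def golden_points(width, height):
    """The four golden-ratio intersection points of a frame."""
    a, b = 1 / PHI ** 2, 1 / PHI  # ~0.382 and ~0.618
    return [(width * u, height * v) for u in (a, b) for v in (a, b)]

def composition_score(subject_center, width, height):
    """1.0 when the subject sits exactly on a golden point, falling
    toward 0 as it drifts away (normalized by the frame diagonal)."""
    sx, sy = subject_center
    diag = (width ** 2 + height ** 2) ** 0.5
    nearest = min(((sx - px) ** 2 + (sy - py) ** 2) ** 0.5
                  for px, py in golden_points(width, height))
    return 1 - nearest / diag

# Score a subject placed on the lower-left golden point of a 1080p frame.
print(composition_score((1920 / PHI ** 2, 1080 / PHI), 1920, 1080))
```

An AI-enabled camera could maximize such a score while tracking, effectively making "keep the subject on a golden point" the objective of the framing controller.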

Real-Time Video Switching

Deep learning algorithms are automating the editing and video creation process, and will assist in bringing AI to real-time video switching as well. Intelligent software will select optimal camera shots or angles based on the content of the stream, using facial, emotion, gesture, clothing, body, and color recognition along with other imaging data and cues. The software will determine what is in each frame, decide whether it is a wide, medium, or close-up angle, and identify which subject or person it includes. It will analyze the audio, video, and other aspects of the stream and switch a full event or show by recognizing faces, speech, movements, or events.

These auto-mixing features will be built into future video switchers, allowing for a completely AI-switched production that will eventually replace the role of a technical director for live events.
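The shot-classification step described above can be made concrete with a toy example. In this sketch (thresholds and names are assumptions for illustration, not production values), each camera's shot is labeled wide, medium, or close-up from how much of the frame the detected face occupies, and the switcher picks the camera matching the desired shot type:

```python
# Illustrative auto-switcher: classify each feed's shot type from the
# relative size of the detected face, then select the camera whose shot
# matches what the "director" logic wants next.

def classify_shot(face_height, frame_height):
    """Label a shot by the face's share of the frame height."""
    ratio = face_height / frame_height
    if ratio > 0.30:
        return "close-up"
    if ratio > 0.10:
        return "medium"
    return "wide"

def pick_camera(feeds, want):
    """feeds maps camera name -> (face_height, frame_height).
    Return the first camera whose shot type matches `want`, else None."""
    for name, (face_h, frame_h) in feeds.items():
        if classify_shot(face_h, frame_h) == want:
            return name
    return None

feeds = {"cam1": (60, 1080),    # small face -> wide shot
         "cam2": (200, 1080),   # medium shot
         "cam3": (400, 1080)}   # close-up
print(pick_camera(feeds, "close-up"))
```

A real switcher would replace the face-size heuristic with a trained model and add cut-timing rules (minimum shot duration, audio-follow), but the select-by-classification structure stays the same.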

Computer-vision-based video switchers can work independently on embedded systems or on-premises devices using existing hardware, and cameras can even leverage a networked cloud server if needed.

Creating Automated Actions and Triggers for Real-Time Graphics, Animations, or CG Characters

A neural network can identify target objects or people with facial recognition, which can trigger production events such as generating a lower-third for a presenter at a conference. Facial recognition could also generate graphical statistics on a particular player on the field, or even allow control of a CG character to be inserted into a stream.

Cognitive technology will be prevalent in everything: sports, esports, corporate communications, education, and live events. It will integrate data-driven assets and visualizations that change according to specific actions, times, locations, or dynamic data in relation to the stream.

Audio Analysis

Natural Language Processing (NLP) allows for automated live transcription, translation, interpretation, captioning, and audio description for use in meetings, lectures, or events. This would be useful for multinational corporations that need live captioning for town halls, product launches, or general communications in multiple languages for a worldwide audience.

Video Analytics and Metadata Extraction for Data Management

As companies get more involved with streaming, the sheer volume of data generated from video is increasing exponentially. The information derived from this data can be leveraged far beyond what humans can extract manually.

AI will interpret streaming content and extract metadata by generating descriptive tags, categories, and summaries automatically. This will allow for more intelligent analytics, content insights, and better content management, paving the way for efficient methods of monetizing video through targeted ads.
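As a toy illustration of the tagging step (real systems use trained language models; this only shows the shape of the pipeline, and the stop-word list is an arbitrary assumption), descriptive tags can be extracted from a transcript by ranking words by frequency:

```python
# Toy metadata extraction: derive descriptive tags from a video
# transcript by counting word frequency after dropping stop words.

from collections import Counter

STOP = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on"}

def extract_tags(transcript, k=3):
    """Return the k most frequent non-stop-words as descriptive tags."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    counts = Counter(w for w in words if w and w not in STOP)
    return [w for w, _ in counts.most_common(k)]

text = ("Streaming video needs metadata. Metadata tags make video "
        "searchable, and tags drive targeted ads for streaming video.")
print(extract_tags(text))
```

In a production pipeline these tags would be written back as time-coded metadata, which is what enables the search, analytics, and ad-targeting uses described above.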


IBM Ramps Up Its AI-Powered Advertising

In recent years, The Weather Company, which produces forecasts for 2.2 billion locations every 15 minutes, has been using its troves of data in ways that go far beyond what’s happening outside. Since its acquisition by IBM in January 2016, the company has also begun swirling deeper and deeper into the world of advertising with the help of Watson, IBM’s artificial intelligence service that’s working on everything from diagnosing diseases to crafting movie trailers. Now, IBM is finally bringing several major components of The Weather Company’s data capabilities under the Watson umbrella with the launch of Watson Advertising this week.

The new division—encompassing data, media and technology services—will offer a suite of AI products for everything from data analysis and media planning to content creation and audience targeting. By integrating The Weather Company’s signature WeatherFx and JourneyFx features along with all of the other data at IBM’s disposal, the company is hoping to transform what is in many ways still a legacy business into a cutting-edge advertising powerhouse.

“Weather impacts your mood and your emotions, and your moods and your emotions are a huge input into your decision-making modality,” says Cameron Clayton, the former CEO of The Weather Company who is now general manager of IBM Watson’s Content and IoT Platform.

Watson Advertising promises to kick-start the era of cognitive advertising, a field in which both legacy tech companies and startups are seeking to transform every aspect of marketing, from image and voice recognition to big data analysis and custom content.

While there are countless ways to use Watson—through dozens of APIs or studio-like projects that can cost millions of dollars—its new advertising division is structured into four units. The flagship service, focused on audience targeting, will utilize Watson’s neural networks to analyze data and score users based on how likely they are to take an action (like purchasing a product, viewing a video or visiting a website). Another piece of the business will use AI for real-time optimization. A third, Watson Ads, will build on a service that launched last year with a number of high-profile brands, employing AI not just for data analysis or targeting but also for content creation. As part of a Toyota campaign, for example, Watson became a copywriter, crafting messaging for the carmaker’s Mirai model based on tech and science fans’ interests.

“The Watson Ad opportunity is an exciting first-to-market idea that advances our learning opportunities in the AI space,” says Eunice Kim, a media planner for Toyota Motor North America. “Not only are we able to create a one-to-one conversational engagement about Prius Prime with the user, but we’re able to garner insights about the consumer thought process that could potentially inform our communication strategies elsewhere.”

There have been plenty of other advertising opportunities for Watson. Earlier this year, it transformed into a doctor, promoting Theraflu while answering questions about various flu symptoms. For Campbell’s, Watson put on its chef’s hat, personalizing recipes within display ads using data about consumers’ locations and what ingredients they had on hand. For a major partnership with H&R Block, Watson turned into a tax expert, deploying an AI smart assistant to help clients find tax deductions.

“Brands are looking to AI as a feature that they might add and what that can do to distinguish them, modernize them and to give them a new look and a competitive edge,” notes Marty Wetherall, director of innovation at Fallon, which created H&R Block’s campaign.

As more marketers become interested in the potential of AI, the rebrand to Watson Advertising allows IBM to separate the advertising capabilities of The Weather Company from its lesser-known operations in industries including aviation, insurance, energy, and finance, explains Watson CMO Jordan Bitterman. Earlier this year, IBM created the Cognitive Media Council, a group of senior-level executives from agencies and brands that meets a few times a year to shape how marketers think about the future of AI.
