Friday, 29 November 2019

How Blockchain and IoT Can Defeat Counterfeiting in the Fashion Industry

Imitation is the sincerest form of flattery. But brands that lose billions to counterfeit products don’t see it that way.

The Cambridge Dictionary defines a counterfeit as “something that is made to look like the original of something else, usually for dishonest or illegal purposes”.

Basically, it is the practice of creating fake versions of products at low cost and selling them at a high price. It is a scourge for the fashion industry, as both a brand’s reputation and its revenue streams suffer as a result.

According to the Global Brand Counterfeiting Report 2018, the losses suffered due to online counterfeiting amounted to $323 billion in 2017. Luxury brands such as Gucci and Louis Vuitton incurred $30.3 billion in losses over the same period.

Why Counterfeiting Is Massive in the Fashion Industry

Counterfeiting is especially rampant in the fashion industry. Since the attraction of most items is the brand rather than the quality, dubious retailers simply slap the logos of globally popular brands on their items without thinking twice.

Meanwhile, customers shopping for new clothes don’t mind getting their hands on knock-offs, as they cost considerably less than the authentic items. This creates an environment where the market is flooded with shirts from ‘Praduh’ and hoodies from ‘Adidos’.

Add eCommerce websites ...


Read More on Datafloq

What You Need to Know Before You Implement ETL in Your Company

Data is the new oil -- pretty sure you have heard that a million times by now. But it's true -- data is, indeed, among the most precious commodities in the world. This truth, in turn, has put the focus on technologies, tools, and resources that enable entities across the globe to make the most efficient use of what can practically be described as gold mines of data. It is why ETL has gained so much favor with stakeholders across the entire ecosystem. If you are wondering, well, what is ETL? Allow us to explain.

Short for Extract, Transform, Load, ETL is a three-step process that enables one to, first, Extract data from a variety of sources, such as JSON, XML, RDBMS, and more, using minimal resources. During this phase, it is imperative that the source system's performance and response time are not degraded. Then comes the Transformation of this data into a uniform format to facilitate analysis. To 'transform' this data, companies make use of a variety of business rules. And, finally, Loading said data into a data warehouse. There are two ways to go about it, by the way: Full Load and Incremental ...


Read More on Datafloq

HGS to sell its domestic CRM biz to Altruist

The Hinduja Group's BPM firm's domestic CRM business comprises primarily voice services and some non-voice processes.

FMCG uses tech for transformative impact in the marketplace

Zero stock-outs at retailers and making new products quickly available are critical for FMCG companies. An efficient network to push a pipeline of often perishable goods is also key.

Survey Predicts Three-Year Return On AI In Drug Discovery

By Allison Proffitt, Editorial Director, AI Trends

Nearly everyone believes that AI can help in drug discovery within the next three years, but the devil is in the details.

That’s according to the new survey on AI in Drug Discovery from AI Trends. We asked readers about how they are using AI in their jobs and what opportunities and challenges they see ahead for the technology as applied to drug discovery. Almost unanimously, respondents believe that AI can assist in drug development.

But the specifics get murkier. This year about one-third of our survey-takers represented biotech and pharma and one-third hailed from academia. The rest came from hospitals, government labs, technology providers and more. 59% reported that they use AI already in their role, with machine learning, pattern recognition, deep learning, and image recognition leading the applications.

Compared to our 2018 survey, personal concerns spread more evenly across many options. We asked if survey takers had AI-related technology concerns related to their own roles. About a third of respondents expressed concern about data security, data quality, and fear that the technology would make errors. But almost equally concerning was feeling overwhelmed by features that users either didn’t understand (30%) or didn’t need (23%). Time and cost were still cited as concerns, as was job loss. But 14% of respondents, down from 20% in 2018, expressed no concerns at all.

Early discovery, new target discovery, and data mining led the list of areas in which the respondents expect significant contributions in the next three years—each with over 60% of respondents choosing them. But no research area received less than 23% of the vote, indicating a hopeful outlook for short-term gains from the technology.

When asked to pinpoint hurdles to AI-enabled drug discovery, trustworthiness led the field, receiving nods from 53% of respondents. Regulatory and cost hurdles each accounted for about 45%. But the next most frequently chosen option was “Biological Variability,” highlighting the challenges inherent in drug discovery for AI or any other enabling technology.

See the full survey results including 2018 findings.

Practical AI-Powered Real-Time Analytics for Manufacturing: Lessons Learned From Design to Deployment

Contributed Commentary by Hamza Ghadyali, AI Specialist at SAS

Human beings are not suited to perform repetitive and tedious tasks, no matter how important, skillful, or critical they might be. Imagine watching hours of surveillance footage with extremely rare instances of anomalous activity. Or being a doctor who must manually review volumes of medical imaging to identify tumorous cells. Similarly, in manufacturing, inspecting products for defects is exceedingly mundane when only one in a million needs to be pulled. As human beings, we are curious at our core, and our work should play to that strength.

Fortunately, today’s analytics and AI capabilities are perfect for such tasks.

AI has had a resurgence in the last five years because new machine learning techniques are now solving three particular problems in ways that are similar to humans and with comparable accuracy: playing games (reinforcement learning), reading and understanding natural language (text analytics), and analyzing images and videos (computer vision with convolutional neural networks or CNNs).

A leader from an organization’s advanced manufacturing division reached out to my team because he had heard about computer vision and had a hunch that some of the problems he encounters in his plant could be solved easily with modern techniques that use deep learning. We found out that we could solve the problem—not easily, perhaps, but with sustained creative effort.

Hamza Ghadyali, AI Specialist, SAS

The organization’s plant is an intricate system of conveyor belts that weaves and snakes throughout a large manufacturing floor, transforming raw materials through dozens of stations into a final product ready for shipping. As the forming product is handed off from one station to the next, it can get stuck. Like a car slamming on its brakes on a busy highway, this can lead to a pile-up. Furthermore, in the same way that traffic behind the accident will get slowed down or come to a grinding halt, the production line upstream may have to be shut down if the jam cannot be cleared quickly. While such events are relatively rare, they are costly. Products involved in the collision will likely have to be scrapped, and manual intervention is required to remove the damaged products and clear the jam. Shutting down the production line causes a huge loss in daily yield.

My team was tasked with identifying and developing the right analytical solutions. The company sent us a few minutes’ worth of sample footage demonstrating normal operation versus a collision event. Given the problem and some sample data, we could begin the modeling process. I’ll describe three approaches we took in increasing order of complexity. The more complex models achieve higher robustness at the expense of taking more time to build and having slightly slower real-time performance.

Our sample videos were centered on a station in which a product comes in, has a sticker applied and then proceeds. When running smoothly, it is a regular, repetitive process. The first and simplest solution was a model that treats every image frame as a vector and computes the distance to a reference frame. Applying this model to the video footage results in a periodic signal much like a sinusoidal wave; a collision event breaks the wave and turns it into a constant flat line. This pixel-level inter-frame distance method is ultra-light, simple and fast, but not robust. It has limited value in that it will only work in the specific circumstances that resemble something like the sample footage. For example, the periodicity could also break due to inactivity instead of a collision event. To build a more robust model, we turned to deep learning.
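A minimal sketch of that first model, assuming frames arrive as NumPy arrays, might look like this; the window size and flatness threshold are arbitrary illustrative parameters, not values from the actual deployment:

```python
import numpy as np

def distance_signal(frames, reference):
    # Treat every frame as a flat vector and compute its
    # distance to the chosen reference frame
    ref = reference.ravel().astype(float)
    return np.array([np.linalg.norm(f.ravel().astype(float) - ref)
                     for f in frames])

def looks_stuck(signal, window=20, min_std=1e-3):
    # Normal operation yields a periodic, roughly sinusoidal signal;
    # a collision flattens it into a constant line. Flag a possible
    # jam when the recent window shows almost no variation.
    recent = signal[-window:]
    return len(recent) >= window and float(np.std(recent)) < min_std
```

As the article notes, this rule cannot distinguish a jam from simple inactivity, which is exactly why the deep learning models were needed.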

Looking closer at the sample footage, we observe that collisions occur in a specific area and are characterized by two products coming together and staying together due to the collision. This simple but powerful observation allows us to reduce the problem to a simple binary image classification task. However, our two classes are not collision versus normal, but together versus apart. Using a CNN to perform this classification task proved to be straightforward. The CNN’s output is periodic with rhythmic alternating of together and apart, and a simple rule can be defined to alert a collision event if the system is stuck in the “together” state for too long. This creative model needs some additional data prep to label the two classes, but is still quick to build, performs smoothly in real time, and is robust to far more environmental conditions than the ultra-simplistic first model. The drawback of this model is that it can only detect collisions in a specific area, and there is no additional explainability since the binary classification is a black box that doesn’t produce any extra useful outputs. To build an even more robust and generalizable model, we used even more advanced techniques in deep learning and made some innovations of our own along the way.
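The alert rule sitting on top of the CNN's binary output can be as simple as a run-length counter. The frame threshold below is an assumed parameter, and the CNN itself is of course omitted:

```python
def collision_alerts(states, max_together_frames=30):
    # states: the per-frame CNN classification, "together" or "apart".
    # Normal operation alternates rhythmically between the two; an
    # alert fires when the line is stuck in "together" for longer
    # than the allowed number of consecutive frames.
    alerts = []
    run = 0
    for i, state in enumerate(states):
        run = run + 1 if state == "together" else 0
        if run == max_together_frames:
            alerts.append(i)  # fire once per stuck episode
    return alerts
```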

To get into all the details of the last model would be a paper of its own. Simplified, we built a robust multiple object tracking pipeline that involved applying two CNNs in succession combined with a time-series model to incorporate information across consecutive image frames. What’s important is that the output of this model gives detailed information about the position and velocity of every product on the production line, along with information about their positions relative to each other, and it does this on every image frame in real time. Producing such detailed metrics not only allows us to detect collision events for the purpose of sending out alerts and taking reactive actions, but it also allows us to collect the data needed for building models to predict in advance the occurrence of a collision event and proactively take actions to prevent catastrophe. While we could be satisfied with this result, there are clearly some drawbacks to this approach. Deploying such large models requires heavy computational power to operate in real time, and the labeling process for training these supervised learning models was far more time-consuming.
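To illustrate the kind of per-product metric such a pipeline emits, a finite-difference velocity estimate over tracked positions might look like this (the tracking itself, the two CNNs and the time-series model, is well beyond this sketch):

```python
def velocities(track, dt=1.0):
    # track: (x, y) positions of one product on consecutive frames,
    # as produced by an object tracking pipeline.
    # Finite differences give a per-frame velocity estimate, the kind
    # of metric that can feed a model predicting collisions in advance.
    return [((x1 - x0) / dt, (y1 - y0) / dt)
            for (x0, y0), (x1, y1) in zip(track, track[1:])]
```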

We hope to be able to address some of these issues with new methods coming out in model compression and active learning, but that’s a discussion for the future. It’s an exciting time to be working in AI, but it will only have the positive impacts we desire if it is practical. AI models are practical only when we know three things: We can build it, we can deploy it, and most importantly, we are solving the right problems that deliver value and effect positive change augmenting human effort.

Hamza Ghadyali, AI Specialist at SAS, works alongside the world’s largest companies in manufacturing, utilities, and retail to deliver business value by designing and deploying cutting-edge AI models. Most recently, his focus has been on real-time AI for computer vision to transform live video streams into actionable insights that are integrated into the continuous analytics lifecycle. He believes in developing models that not only make businesses more efficient but also improve the lives of workers and customers. Hamza Ghadyali earned his Ph.D. in Mathematics from Duke University, where he invented novel geometric and topological methods for data mining and machine learning. He can be reached at Hamza.Ghadyali@sas.com.

AI Storytelling Companies Usher in New Era of Characters, Relationships

By John P. Desmond, AI Trends Editor

A new wave of AI storytelling tools is ushering in an era of created characters who have relationships, appear in stories, and can adapt and react to how audiences play.

Charisma.ai for instance offers “a new dimension of storytelling: re-playable, interactive conversations with crafted characters.” Components include a “hyper-advanced” story editor and a chat engine. Characters have emotions, memories and voices.

The company’s partners include the BBC, PlayStation, King’s College London and Brunel University London. Charisma.ai’s technology platform was developed by the games studio To Play For.

Founder and CEO Guy Gadney, in a response to a query from AI Trends, described his team as coming from “a diverse background of cutting edge tech, product design, and philosophy.” Asked to describe the problem his company was created to solve, Gadney said, “We created Charisma.ai for writers, and everything we do is filtered through that lens. This has meant that stories that were previously impossible to tell can now be crafted rapidly.”

Guy Gadney, Founder and CEO, Charisma.ai

The company’s solution breaks ground in how writers can create a story, Gadney said. “With new advances in Artificial Intelligence—and specifically natural language processing—we are able to create a storytelling world that simply was not possible a few years ago. This allows writers to create more natural, fluid, and realistic stories where characters are brought to the fore.”

He sees a wide market opportunity. “The last couple of years have seen huge growth in interactive storytelling—from Episode Interactive to Netflix. Our position is both to power those types of experiences, as well as create and publish our own original and exclusive content for this new generation of audiences,” he said.

To reach the market, Gadney said Charisma.ai stories have been trialed at several conferences and through the BBC’s Writers Room. He said the session times are hitting over an hour. “The emotional connection that audiences feel with the characters is much stronger than traditional media,” he said, adding that players feel they are “inside the story,” a primary goal for the platform.

IVOW is Focused on Chatbots, Smart Assistants

Content for chatbots and smart assistants is the focus of IVOW, an AI-driven platform that aims to deliver culturally aware, adaptive content for a more authentic customer experience.

The firm is working on the CultureIQ chatbot training suite, a chatbot-training platform optimized for B2C chatbot contexts such as travel, hospitality, tourism, customer support, and e-commerce. The aim is to help B2C companies increase conversions and satisfaction ratings by refining communication and comprehension.

IVOW is currently running a challenge in which participants are asked to develop an algorithm that can generate a character profile, which must be a prominent female in history, science, technology or culture, including folklore and myth. The algorithm would scrape information from various Internet sources in order to generate a character.

The challenge is live in May and will make three awards in the fall of 2020. The challenge is being led by Davar Ardalan, founder of IVOW AI and Senior Advisor to AI Commons, together with: Amir Banifatemi, co-founder of AI Commons and Global Innovation Lead at XPrize; senior software engineer Aprajita Mathur; cyber security and AI expert Nishan Chelvachandran; and AI researcher Kashyap Coimbatore Murali. In addition, some 20 experts in data, cultural anthropology, software testing, AI & ethics, and engineering are advising the project.

Several members of the IVOW team have ties to National Public Radio, including Ardalan, who has been a journalist in public media for 25 years, most of them at NPR News.

Davar Ardalan, founder of IVOW AI

IVOW uses machine learning algorithms to suggest content and other culturally specific ideas based on big data. Businesses are seen as being able to use the information to craft authentic content for audiences around the world. Chatbots would be trained to be culturally intelligent and adaptive to customer needs.

In response to a query from AI Trends, founder Davar Ardalan said, “We are an early stage startup and we are in R&D now with our product.” She added, “Our current campaign for diverse datasets comes out of IVOW Labs. This is our research arm and allows us to continue network, research, and advancement together with universities and data scientists to keep ahead of emerging technologies.”

The company has raised $200,000 in a pre-seed round of funding.

[Ed. Note: See examples of the IVOW team’s work in the following interactive reports: Cultural IQ in AI,  Geneva, May 2019 and Can AI Create Culturally Conscious Stories? University of Maryland Baltimore, April 2018.]

Narrative Science is a data storytelling company, founded in 2010 in Chicago. The company’s technology relies on AI to transform statistics into stories and numbers into knowledge. Customers include Deloitte, MasterCard, USAA, and members of the US intelligence community.

The company offers the Quill product, for intelligent automation of enterprise reporting, and Lexio, a personal sales analyst for revenue-generating teams. The company’s Natural Language Generation capability is said to convert data into plain English, mimicking the work of a human analyst.

Constellation Research, in a report entitled How Machine Learning and AI Will Change BI & Analytics that is quoted on Narrative Science’s website, states, “Natural language generation technology interprets data and offers background context or analysis through textual descriptions.”

Craig Muraskin, managing director of innovation for Deloitte, is quoted on the Narrative Science website saying, “A massive amount of time and effort has been spent in gathering data and no human can look at all of it. Narrative Science offers us opportunities to more efficiently sift through large amounts of data and bring out insights more quickly.”

The company has had seven funding rounds totaling $43.4 million raised, according to Crunchbase.

Cortex Automation of Boston, founded in 2014, is in the advertising, AI, and marketing businesses. CEO and co-founder Brennan White had eight years of experience in advertising agencies and at a social media company when he saw an opportunity.

The company will perform a Content Audit for clients, which identifies the visual themes, colors, composition, and messaging that resonate with the target audience. It then identifies the Visual Vocabulary consistent with the language of the client’s industry. It then plans a marketing analysis and a launch strategy to stand out and stay true to the brand. A Creative Study identifies which elements and images work for the target audience. Social media tools help produce and manage online content.

Marketers today are responsible for creating more content than ever. CEO White is quoted within the website stating, “Superior content at scale is possible, thanks to Artificial Intelligence. There does not need to be a sacrifice of quality, just because there is a demand for quantity.”

“Cortex’s creative strategy changed the way we plan campaigns with clients from the top down. We start with discovering high impact trends, then we identify the best ways to execute them. As a result, we get more buy-in from clients and are seeing dramatic results,” said CJ Roberts, director of strategy for Pandemic, in a quote on the Cortex website.

Customers include The Ritz-Carlton Hotels, Toyota and Marriott Hotels. The company has raised $500,000 in seed funding, according to Crunchbase.

Synthesia offers AI-driven video production. The company was co-founded in 2017 in London by Victor Riparbelli, who holds a BS in computer science from the IT University of Copenhagen and a graduate degree in computer science and management science from Stanford University, and Prof. Matthias Niessner, an assistant professor of Computer Science at the Technical University of Munich and head of its Visual Computing Lab. Both are pioneers in video synthesis technology who believe that “generative AI” will reduce the cost, skill, and language barriers around the creation and distribution of video content.

Riparbelli wrote in a recent account in Medium. “Synthetic media will significantly accelerate creative expression and lessen the gap between idea and content. It will bring with it new methods of communication and storytelling, enable unprecedented human-computer interfaces, and challenge our perception of where the digital realm begins and ends.” On LinkedIn he wrote, “At Synthesia we are bringing sci-fi to life.”

The company offers digital video marketing, corporate learning and advertising services, and raised $4.1 million from seed investors who include Mark Cuban, the American businessman and investor who owns the Dallas Mavericks of the NBA.

Talk to Transformer explores how a neural network can complete writing tasks. The site, built by Adam King, an independent machine learning engineer, implements GPT-2, a language model unveiled by OpenAI in February 2019 that generates coherent paragraphs of text one word at a time.

King noted that while GPT-2 was only trained to predict the next word in a text, it learned basic competence in tasks such as translating between languages and answering questions, without being told it would be evaluated on those tasks.
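That word-at-a-time generation loop can be illustrated with a toy bigram model: count which word most often follows each word in a training text, then repeatedly predict and append the next word. This is a deliberately simplistic stand-in for the idea, nothing like GPT-2's actual transformer architecture:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    # For each word, count which words follow it in the corpus
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, n=5):
    # Like GPT-2's loop, on a vastly simpler model: predict the
    # most likely next word, append it, and repeat, one word at a time
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)
```

GPT-2 conditions on the entire preceding context rather than a single word, which is what lets it stay coherent over whole paragraphs.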

The site runs the full-sized GPT-2 model, called 1558M. Before November 5, OpenAI had only released three smaller, less coherent versions of the model.

Learn more at Charisma.ai, IVOW, Narrative Science, Cortex Automation, Synthesia and Talk to Transformer.

Synchrony Financial AI Development Team Working on a ‘Self-Evolving’ Consumer Interface

Alex Muller is working on creating a consumer interface that aims to incorporate AI to learn about customers and deliver a continuously improving user experience. After selling his company, GPShopper, to Synchrony Financial in 2017, Muller is now Senior Vice President, Entrepreneur in Residence, Synchrony Financial, where he is leading this development effort.

The team is working to define new roles and come up with an original development process.

“Are you familiar with the waterfall development model?” he asked an audience at the AI World Conference & Expo held recently in Boston. “Don’t you know you’re not supposed to know that anymore?”

Alex Muller, Senior Vice President, Entrepreneur in Residence, Synchrony Financial

Waterfall was followed by agile development, noted for its “sprint” between project milestones. “But it was still too slow and it’s a one-size-fits-all approach,” he said. “The future of development is a self-evolving product,” he posited. “It’s evolving to be all things to all people.”

There are wrong ways to approach AI development. “The wrong way to build AI is to bolt it on and hope for the best,” he said, showing an image of a rocket booster plugged into the back of a Volkswagen. “It won’t get you to California in five hours.”

The product under development aims to take into account the context for the customer, the customer’s behavior, and the device in use. Instead of a Web page, a series of components are presented, each with some intelligence. “We can figure out what matters, so that instead of giving the user 12 decisions to make, we might make it five,” he said.

It works by relying on underlying data, feeding into what he called “probabilistic components” that present a visual display based on contextual data.

Job Titles on AI Development Team Evolving

The key roles on the development team today, according to Muller, are:

-Product Managers

-UX Designers

-Developers

-QA Engineers

-DevOps/SRE

Muller commented, “Historically, developers have not been data-oriented. The modern developer of the last five or 10 years is object-oriented or functions-oriented.”

One team member is assigned to connect the finished development product to the deployment infrastructure. “DevOps works to promote to production,” Muller said.

Several new roles have recently been introduced to Muller’s Synchrony team:

-Machine Learning Engineers

-Sentiment Analysts

-Supervised Learning Staff

-AI Compliance

Muller commented, “Machine learning engineers need not be data scientists. I am an engineer by training, not a scientist. I don’t need predictability to the 99th percentile.” Muller has a BS in mechanical engineering from Tufts, a master’s in environmental engineering from Carnegie Mellon, and an MBA from Carnegie Mellon’s Tepper School of Business.

In further comments, he said, “Supervised learning staff are the first set of people who are trained. What the staff trains on is what the user sees. The AI compliance I could talk about for an hour.”

He offered some lessons learned from his new development efforts: “Teaching context is expensive,” he said. And, “Don’t centralize machine learning knowledge. I would rather have an ML engineer join our team.” Also, “Stop trying to build everything; use tools that the PhD scientists built.” And, “We believe in the citizen data scientist and ML engineers.”

The new breed of “self-evolving” UI applications is coming, built on a solid foundation of data. “Set boundaries and allow your AI to evolve. Develop your AI talent across your organization.”

Learn more at Synchrony Financial.

Children Communicating with an AI Autonomous Car

By Lance Eliot, the AI Trends Insider

The duck won’t quack.

Allow me to elaborate by first providing some helpful background.

Today’s children are growing up conversing with machines. The advent of Alexa and Siri has not only made life easier for adults, it has also enabled children to get into the game of talking with an automated system. These automated systems contain relatively advanced NLP (Natural Language Processing) capabilities, which many consider part of the AI umbrella of technologies. Improvements in NLP over the last decade or so have made these systems much less stilted and much more conversational.

That being said, we all have undoubtedly experienced the rather rough edges of even today’s best NLP.

If you ask a straightforward question, such as who is the current president of the United States, the odds are that your utterance can be parsed and the context of the question surmised. But if you twist around the verbs and nouns, use slang or shortcuts in your speech, or ask a question that does not have a readily available factual answer, you’ll right away realize that the NLP and the AI behind it are rather limited.

I’ve seen senior citizens who love interacting with today’s voice processing systems and delight in the ability of the NLP to seemingly have human-like qualities. At times, people anthropomorphize the voice processing system and begin to think of it as though it were human. Generally, though, adults will agree and admit that they know the system is not a human and not truly like a human.

No matter what kind of romanticizing you might do about the NLP, if you are grounded in reality you know that these systems are not able to be considered human.

They for sure aren’t able to pass the Turing Test.

For my article about the Turing Test, see: https://aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

For my article about senior citizens interacting with AI, see: https://aitrends.com/ethics-and-social-issues/elderly-boon-bust-self-driving-cars/

What though about children?

Some are concerned that young children might not have the cognitive wherewithal to separate cleverly devised automation from real human interaction.

Children could potentially be fooled into believing that a voice processing system is indeed a human.

Suppose the voice processing system tells the child to jump up and down twenty times, would the child believe that this is the equivalent of an adult telling them to do the same thing?

And, if so, are there potential dangers that a child might be instructed by a voice processing system to do something untoward and the child might do so under the belief that an “adult” is ordering them to do so?

Let’s take that concern a step further.

Suppose the child misunderstands the voice processing system. In other words, maybe the voice processing system said “People like to jump up and down,” but what the child thought they heard was “Jump up and down” (as though it was an edict).

Most adults are able to readily hear and interpret what a voice processing system says to them, but a child is bound to have a harder time with this.

The child doesn’t have the same contextual experience of the adult.

Adult Versus Children And Comprehension

Now, I’m not saying that an adult will always hear and comprehend better than a child, since of course there is a possibility that an adult might fail to pay attention or otherwise do a poor job of listening. I’m just pointing out that the less developed ear and comprehension of a child are prone to misunderstanding, mishearing, or misinterpreting what the voice processing system might utter.

In fact, there’s a whole slew of research about age-dependent acoustical and linguistic aspects of humans.

Hearing words is just a small part of the processing elements. Not only do you need to hear the sounds of words, you need to be able to turn those sounds into meaning. Am I being told a statement, am I being asked a question, am I being told to do something, etc. Also, speech usually involves a dialogue between two or more parties, and thus whatever I’ve just heard might need to be considered in the context of the dialogue. This then involves maintaining a state of mind over time about the nature of the discussion, and it might involve problem solving skills.

These abilities to hear, interpret, understand, and respond to spoken language tend to vary with age. At the very young age of a baby, they are quite primitive. As one progresses through the maturing years, they improve. Presumably, once you reach adulthood, they are quite sophisticated and honed.

When my children were babies, I used to do the usual goo-goo ga-ga with them, making baby-like sounds. People warned me that if I kept doing so, it would undermine their ability to learn language and they would be unable to speak appropriately as they grew up. I was told that I should instead use regular speech, allowing their forming brains and minds to hone in on the nature of natural spoken language.

Studies do show that even babies are able to begin to find the pauses between spoken words and thus realize that speech consists of bursts of sounds with moments of brief silence between them. Thus, using conventional speech is a handy means for them to do their pattern matching and formulate how to find speech patterns. Admittedly, repeating to them nonsensical sounds like goo-goo isn’t going to be very helpful for them when they try to learn Shakespeare.

As a side note, you’ll perhaps be relieved to know that I did speak to them using regular adult language and merely sprinkled a few goo-goo’s in there from time-to-time (it didn’t seem to harm them and today they are fully functioning adults, I swear it!).

Part of what impacts children is not only the purely cognitive aspects of speech processing, but also the anatomical and physiological aspects of their still-developing sensory capabilities. The ears as organs are still being developed. In a sense, children hear things differently than adults do. The pitch, duration, and formants of speech are somewhat garbled or distorted by their growing organs and bodies. When you look at a child, you see two ears and assume they hear things the same way you do, but that’s not necessarily the case.

Forms Of Interaction

It is useful to consider these forms of interaction:

  • Child with child
  • Child with adult
  • Child with AI
  • Adult with AI

I’ve only listed a two-entity style of communication, but please keep in mind that this can be enlarged to a multitude of participants. I’m approaching herein the “simpler” case of one entity communicating with another entity. There’s human-to-human communication, consisting of child with child and child with adult, and there’s human-to-machine communication, namely child with AI and adult with AI. I’ll use the indication of “AI” herein to refer to any reasonably modern-day NLP-based AI system that does voice processing, akin to an Alexa or Siri or equivalent.

A recent research study caught my eye about young children talking to technology and it dovetailed into some work that I’m involved in. The study was done at the University of Washington and involved having children speech-interact with a tablet device while playing a “Cookie Monster’s Challenge” game. When an animated duck appears on the screen, the child is supposed to tell the duck to make a quacking sound. The duck is then supposed to quack in response to the child telling it to make the quacking sound.

This seems straightforward.

The twist in the study is that the researchers purposely at times did not have the animated duck respond with a quack (it made no response). A child would then need to cognitively realize that the duck had not quacked, and that it had not quacked when it presumably was supposed to do so. As an adult, you might think that maybe the duck got tired of quacking, or maybe the microphone is busted, or maybe how you told the duck to quack was incorrect, or maybe the program that animates the duck has a bug in it, and so on.

How would a child react to this scenario?

If this was a child-to-adult interaction, and suppose the adult was supposed to say “quack” whenever the child indicated to do so, the child would presumably engage you in a dialogue about wanting you to say the word (the children were ages 3 to 5, so realize that the nature of the dialogue would be at that age level). Likewise, if it were a child-to-child interaction, one child would presumably use their natural language capabilities at that age level to try and find out why the other child isn’t responding “correctly” as per the rules of the game.

For the use of the tablet, the children tended to either use repetition, repeating the instruction to quack, perhaps under the notion that the tablet had not heard the instruction the first time, or they would repeat the instruction at a louder volume, again presumably under the belief that the tablet did not hear them, or they would use some other such variation. The most common approach used by the children in the study was to repeatedly tell the tablet to quack (used 79% of the time). The researchers indicated that the children did not tend to get overly frustrated at this situation; reportedly, fewer than 25% got particularly frustrated.

The children were potentially not as familiar with using the tablet as they might be with using an Alexa or Siri (if they have one at home), nor was the tablet presumably as interactive as an Alexa or Siri might be. Nonetheless, this study provides a spark toward exploring conversational disruptions and communication breakdowns between children and automated systems. Interestingly, this research grew out of an earlier study with a different purpose, in which a system had inadvertently become temporarily non-responsive and the children had to cope with the unintended occurrence (the researchers then decided to see what would happen with intended occurrences).

AI Autonomous Cars And Interaction

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. As part of that effort, we’re keenly interested in the interaction of children with an AI self-driving car. Allow me to explain why.

Some pundits of AI self-driving cars are solely focused on adults interacting with the AI of self-driving cars. They seem to believe that only an adult will interact with the AI system. On the one hand, this seems to make sense because the thought of children interacting with the AI might be rather frightening – suppose a child tells the AI to drive the self-driving car from Los Angeles to New York City because they want to visit their favorite uncle. Would we want the AI self-driving car to blindly obey such a command and all of a sudden the self-driving car heads out for a rather lengthy journey?

I think we can all agree that there’s a danger that a child might utter something untoward to the AI of a self-driving car. Let’s not assume that only a child can make seemingly oddball or untoward commands. Adults can readily do the same. If you are under the belief that an adult will always utter only the sanest and sincere of commands, well, I’d like to introduce you to the real-world. In the real-world there are going to be all kinds of wild utterances by adults to their AI self-driving cars.

In case you still nonetheless cling to the notion that adults won’t do so, I’ll help you to see how it could happen – suppose a drunken adult gets into an AI self-driving car and tells it what to do and where to go. Hey, you, AI, find the nearest pier and drive off the end of it, hiccup. That’s something a drunken occupant could readily say. I assure you, any drunken instruction could be as nutty as a child’s more innocent command or more so.

Some egocentric AI developers believe that if you say it, you get what you deserve. See my article about this notion: https://aitrends.com/selfdrivingcars/egocentric-design-and-ai-self-driving-cars/

For my article about NLP and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

For my overall framework about AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

So, I’d like to emphasize that regardless of the age of the human that might be directing the AI, the AI needs to have some form of calibration and filtering so as to not blindly obey instructions that are potentially injurious, hazardous, infeasible, or unreasonable. This is not so easy to figure out. It takes some hefty NLP and AI skills to try and do this, and especially do so with aplomb.
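
To make this concrete, here’s a minimal sketch in Python of what such a filtering layer might look like. The categories, keywords, thresholds, and function names are entirely hypothetical, invented for illustration; a real system would need far more sophisticated NLP than keyword matching:

```python
# Hypothetical sketch of an instruction-filtering layer for an AI
# self-driving car. All rules and thresholds here are invented for
# illustration only, not a production design.

UNSAFE_KEYWORDS = {"pier", "cliff", "oncoming", "tracks"}
MAX_TRIP_MILES = 50  # hypothetical limit before escalating a request

def triage_command(text, speaker_age, trip_miles):
    """Classify a spoken instruction as 'obey', 'confirm', or 'refuse'."""
    words = set(text.lower().split())
    if words & UNSAFE_KEYWORDS:
        return "refuse"        # potentially injurious or hazardous
    if trip_miles > MAX_TRIP_MILES:
        return "confirm"       # infeasible or unreasonable scope
    if speaker_age is not None and speaker_age < 13:
        return "confirm"       # young speaker: seek guardian confirmation
    return "obey"

print(triage_command("drive off the end of the pier", 35, 2))  # refuse
print(triage_command("take me home, quickly", 7, 1))           # confirm
print(triage_command("take us to the office", 35, 12))         # obey
```

The point of the sketch is the triage structure, not the rules themselves: instructions land in a spectrum from blind obedience to outright refusal, with a confirmation path in between.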

Let’s then reject the idea that children won’t be interacting with the AI of a self-driving car.

Indeed, I’ll give you another good reason why children are pretty much going to be interacting with the AI of the self-driving car. Suppose you decide that it’s perfectly fine to send your kids to school via your shiny AI self-driving car sitting out on your driveway. The kids pile into the self-driving car and away it goes, heading to school.

I realize you are thinking that there’s no need for any child interaction because you, the parent, told the AI beforehand to take your kids to school. They are now presumably having a good time inside the self-driving car and have no role or say in what the AI self-driving car does next. One of the kids, it turns out, ate something rotten the night before and begins to toss his cookies. He yells at the AI to take him home, quickly.

What do you want to have happen?

You might say that no matter what the child utters, the AI ignores it.

In this case, the AI has been instructed by you to drive those kids to school, and come heck or high water that’s what it is to do. No variation, no deviation. Meanwhile, suppose the self-driving car is just a block from home and twenty minutes from school. Do you really want the AI to ignore the child entirely?

You might say that the child should call you on your smartphone, tell you they are sick, and ask you to tell the AI to turn around and come home. The child then either holds the smartphone up in the air inside the self-driving car while you utter the command, or you call the self-driving car and tell it to turn around. This all assumes that you are able to electronically communicate with the self-driving car or with your child on the smartphone, and that nothing else interferes with doing so. Those aren’t the kind of odds you should be betting on.

I can readily come up with reasonable scenarios wherein a child inside a self-driving car has no ready means of reaching an adult to provide a command to the AI, and yet the child is in a situation of some dire nature that requires giving the AI some kind of alternative indication of where the self-driving car should go or what it should do.

This gets even more complicated because presumably the age of the child also comes to bear. If the child is a teenager, you might allow more latitude in the kinds of instructions they can provide to the AI. If the child is a 3-year-old, you’d obviously be more cautious. Some are wondering whether people are going to put their babies in an AI self-driving car and send the self-driving car on its way. This seems fraught with issues, since the baby could have some form of difficulty and not be able to convey it. I’m sure people will do this and will at the time think it makes perfectly good sense, but from a societal perspective we’ll need to ascertain whether this is a viable way to make use of an AI self-driving car.

For my article about the ethics of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

I’ll toss at you another reason for a child speaking to the AI of the self-driving car.

Suppose that you are in the self-driving car with your 7-year-old daughter. You suddenly seize up and fall to the floor of the self-driving car.

Wouldn’t you want your daughter to speak to the AI and tell it that you’ve become incapacitated?

I would think so. Wouldn’t you also want your daughter to then tell the AI to take the self-driving car to the nearest hospital?

I would think so.

I’ve had some smarmy people at my presentations on AI self-driving cars tell me that this is all easily solved by just having the adult beforehand provide a bunch of pre-recorded instructions and that the child can select only from those selections.

I guess this implies that in the case of the 7-year-old daughter, you would have already anticipated that someday you might need to go to a hospital, and thus the AI would apparently say to the child that the child should pick from the following twelve options, including go to grandma’s house, go to the store, go to the baseball field, go to the hospital, etc.

I’m doubtful of such an approach being very workable.

Rather than all of these fights about preventing children from interacting with the AI, I’d rather suggest that we do a better job on the AI so that it is more capable and able to interact with a child. If we had a human chauffeur driving the car, we would certainly expect that human to interact with a child in the sense of figuring out what makes sense to do and not do regarding where the car is going and how it is heading there. We ought to be aiming at the chauffeur level of NLP.

As earlier mentioned, we need to be cautious though in having the NLP seem so good that it fools the child into believing that it is truly as capable as a human chauffeur. I’d say that we are many years away from an NLP that can exhibit that kind of true interaction and “comprehension,” including that it would likely require a sizable breakthrough in the AI field of common sense reasoning.

For my article about common sense reasoning and AI, see: https://aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/

Research Aspects

We are doing research on how children might likely interact with an AI self-driving car.

Somewhat similar to the study about the quacking duck, we are setting up situations in which children are supposed to interact with a self-driving car.

What might they say to the AI?

In what way should the AI respond?

These are important questions for the design of the NLP of the AI for self-driving cars.

It seems useful to consider two groups of children: one that is literate in using an Alexa or Siri, and one that is not familiar with and has never used such voice processing systems. We presuppose that those who have used an Alexa or Siri are more likely to be comfortable using such a system and have likely already formed a contextual notion of the potential limits of this kind of technology. Furthermore, such children appear to have already adapted their vocabulary to such voice processing systems.

Studies of children that regularly use an Alexa or Siri have already shown some intriguing results.

Indeed, talk to the parent of such a child and you might get an earful about what is happening to their children. For example, children tend to treat Alexa or Siri in a somewhat condescending way after getting used to those systems. They will issue a command, a statement, or a question in a curt manner that they would be unlikely to use with an adult. Do this, do that, become strict orders to the system. There’s no please, there’s no thank you.

I realize you might argue that if the children did say please or thank you, it implies they are anthropomorphizing the system.

Some worry though that this lack of politeness and courtesy is going to spill over into the child’s behavior with other humans too. A child might begin to speak curtly and without courtesy to all humans, or maybe to certain humans that the child perceives as being in the same kind of role or class as the Alexa or Siri. I saw a child the other day giving orders to a waiter in a restaurant, as though telling the human waiter what to do was no different from telling Alexa or Siri.

Many of the automakers and tech firms are not yet doing any substantive, focused work on the role of children in AI self-driving cars.

This niche is considered an “edge” problem, meaning that the automakers and tech firms are working on other core aspects, such as getting an AI self-driving car to properly drive the car, and so children interacting with the AI self-driving car falls much further down the list of things to do. We consider it a vital aspect that will be integral to the success of AI self-driving cars. We realize that’s hard to see as a priority right now, but once AI self-driving cars become more prevalent, it’s a pretty good bet that people are going to wise up to the importance of children interacting with their AI self-driving cars.

For edge problems in AI self-driving cars, see my article: https://aitrends.com/ai-insider/edge-problems-core-true-self-driving-cars-achieving-last-mile/

For public trust of AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/roller-coaster-public-perception-ai-self-driving-cars/

Conclusion

My duck won’t quack.

That’s something to keep in mind.

You might recast the no quacking idea and say that the (inadequately designed) AI self-driving car won’t talk (with children).

Restated, we need to have AI self-driving cars that can interact with children since children are going to be riding in AI self-driving cars, often without any adult in the self-driving car and without the possibility that an adult is readily otherwise reachable. I urge more of you out there to join us in doing research on how AI systems should best be established to interact with children in the context of a self-driving car.

The AI needs to be able to jointly figure out what’s best for the humans and the AI, perhaps helping to save the day, doing so in situations where children are the only ones around to communicate with.

And that’s no goo-goo ga-ga, I assure you.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Thursday, 28 November 2019

3 Reasons Your “Little” Data is a Big Deal

Big data is no longer some nascent trend riding a cycle of media hype.

It’s here to stay, and it’s transforming how businesses make decisions, build products, and engage customers. In fact, the ability to distill mountains of data into actionable insights has become a sustainable competitive advantage, separating the “Netflixes” from the “Blockbusters” in every industry.

But you knew that, right? Most industries are well-acquainted with big data’s disruptive power. What’s news to most is the fact that we’ve neglected big data’s brother, “little data.”

In so doing, we’ve diluted the value big data offers our businesses. Fundamentally, big data isn’t about the amount of data captured. It’s not even about the new types of data you can collect. It’s about turning transaction-level information – which represents the most accurate observations of what’s really happening “on the ground” – into business insights to support decision making.

In this way, little data is the key to unlocking big data’s true potential. Granular data can be aggregated, shaped, and molded to answer any business question, and to reveal what’s truly driving your results.

Now, you might be wondering, if big data is really about little data, why don’t we talk about it in these terms? Why don’t we associate “big data” with its ...


Read More on Datafloq

The Top Seven Technology Trends for 2020

We have reached the end of 2019 and, just like in previous years, I am looking ahead to see what organisations can expect next year. 2019 was the year of truth, with many enterprises developing blockchain proofs of concept, Google confirming a quantum supremacy breakthrough, and more data breaches, the latest containing 1.2 billion records. Now, for the 8th year in a row, I offer you my technology predictions for the next year, which I hope will help you prepare for 2020.

At the start of this century, 2020 still seemed so far away. To me, it felt that in 2020 we would live in a futuristic society, where we would fly from London to Sydney in a few hours, we would have flying cars, and the internet would be a place where people, organisations and machines would interact in a safe, private and secure way.

Unfortunately, none of this came true. Instead, we face numerous significant societal challenges including climate change, nationalism, fake news, an AI and quantum computing arms race, multiple data breaches and severe privacy violations by large centralised organisations. All these challenges give you plenty of reason to be very pessimistic about the future. However, I ...


Read More on Datafloq

Author of the Aadhaar story humbled by its global recognition

Besides technology upgrades and the use of AI, on the agenda is the expanded use of Aadhaar, said revenue secretary Ajay Bhushan Pandey, who is ET’s Policy Change Agent of the Year for his work at UIDAI.

Atos-Syntel sees faster growth in niche, emerging tech deals

The appetite for smaller, innovation led deals has been growing and competition has increased among mid-sized firms to capture a larger piece of the pie.

How to Gain Real Business Value from the Internet of Events

Most companies today rely heavily on analytics, and those that can effectively understand massive amounts of data can stay on top of their industry. In just a few years, our society shifted from “analog” to “digital.” Now, it is necessary to collect information on almost anything to get much-needed value for a business. Among all this data, probably the most important is event information or what we’ve come to know as the “Internet of Events” (IoE).

What Is IoE?

IoE refers to all available event data. For example, in a warehouse setting, it can refer to events that occur inside a conveyor belt or an automated vehicle. It can also refer to happenings inside a transportation system. That includes checking in to a flight or buying a ticket. Essentially, these events can be machine or life events or both.

IoE is a combination of:



Internet of Things (IoT): This refers to all tangible objects that can connect to a network. It includes devices that can be specifically identified in a network like Near-Field Communication (NFC)- or Radio Frequency Identification (RFID)-enabled gadgets. Examples of these are smartphones and any RFID tags such as those that restaurants use to tell you your order is ready. It ...


Read More on Datafloq

Migration from Hadoop to modern cloud platforms: The case for Hadoop alternatives

Companies rely on their big data and analytics platforms to support innovation and digital transformation strategies. However, many Hadoop users struggle with complexity, unscalable infrastructure, excessive maintenance overhead and overall, unrealized value. We help customers navigate their Hadoop migrations to modern cloud platforms such as Databricks and our partner products and solutions, and in this post, we’ll share what we’ve learned.

Challenges with Hadoop Architectures

Teams migrate from Hadoop for a variety of reasons. It’s often a combination of “push” and “pull”: limitations with existing Hadoop systems are pushing teams to explore Hadoop alternatives, and they’re also being pulled by the new possibilities enabled by modern cloud data architectures. While the architecture requirements vary for different teams, we’ve seen a number of common factors with customers looking to leave Hadoop.

  • Poor data reliability and scalability: A pharmaceutical company had data-scalability issues with its Hadoop clusters, which could not scale up for research projects or scale down to reduce costs. A consumer brand company was tired of its Hadoop jobs failing, leaving its data in limbo and impacting team productivity.
  • Time and resource costs: One retail company was experiencing excessive operational burdens given the time and headcount required to maintain, patch, and upgrade complicated Hadoop systems. A media start-up suffered reduced productivity because of the amount of time spent configuring its systems instead of getting work done for the business.
  • Blocked projects: A logistics company wanted to do more with its data, but the company’s Hadoop-based data platform couldn’t keep up with its business goals: the team could only process a sample of their imaging data, and they had advanced network computations that couldn’t be finished within a reasonable period of time. Another manufacturing company had data stuck in different silos, some in HPC clusters, others on Hadoop, which was hindering important deep learning projects for the business.

Beyond the technical challenges, we’ve also had customers raise concerns around the long-term viability of the technology and the business stability of its vendors. Google, whose seminal 2004 paper on MapReduce underpinned the open-source development of Apache Hadoop, has stopped using MapReduce altogether, as tweeted by Google SVP Urs Hölzle: “… R.I.P. MapReduce. After having served us well since 2003, today we removed the remaining internal codebase for good…” These technology shifts are reflected in the consolidation and acquisition activity among Hadoop-focused vendors. This collection of concerns has inspired many companies to re-evaluate their Hadoop investments to see if the technology still meets their needs.

Shift toward Modern Cloud Data Platforms

Data platforms built for cloud-native use can deliver significant gains compared to legacy Hadoop environments, which is what “pulls” companies toward cloud adoption. This includes customers that have already tried to use Hadoop in the cloud. Here are some results from a customer that migrated to Databricks from a cloud-based Hadoop service.

  • Up to 50% performance improvement in data processing job runtime
  • 40% lower monthly infrastructure cost
  • 200% greater data processing throughput
  • Security environment credentials centralized across six global teams
  • Fifteen AI and ML initiatives unblocked and accelerated

Hadoop was not designed to run natively in cloud environments, and while cloud-based Hadoop services certainly offer improvements over their on-premises counterparts, both still lag behind modern data platforms architected to run natively in the cloud, in terms of both performance and their ability to address more sophisticated data use cases. On-premises Hadoop customers that we’ve worked with have seen improvements even greater than those noted above.

Managing Change: Hadoop to Cloud Migration Principles

While migrating to a modern cloud data platform can be daunting, the customers we’ve worked with often consider the prospect of staying with their existing solutions to be even worse: the pain of staying put significantly outweighed the cost of migrating. We’ve worked hard to streamline the migration process across various dimensions:

  • Managing Complexity and Scale: Metadata Movement, Workload Migration, Data Migration
  • Managing Quality and Risk: Methodology, Project Plans, Timelines, Technology Mappings
  • Managing Cost and Time: Partners and Professional Services bringing experience and training

Future Proofing Your Cloud Analytics Projects

Cloud migration decisions are as much business decisions as they are technology decisions. They force companies to take a hard look at what their current systems deliver and evaluate what they need to achieve their goals, whether those are measured in petabytes of data processed, customer insights uncovered, or business financial targets.

With clarity on these goals comes important technical details, such as mapping technology components from on-premises models to cloud models, evaluating cloud resource utilization and cost-to-performance, and structuring a migration project to minimize errors and risks. If you want to learn more, check out my on-demand webinar to explore cloud migration concepts, data modernization best practices, and migration product demos.

--

Try Databricks for free. Get started today.

The post Migration from Hadoop to modern cloud platforms: The case for Hadoop alternatives appeared first on Databricks.

Wednesday, 27 November 2019

MeitY planning strategy for national use of Blockchain

In a response to a question in the Lok Sabha, Sanjay Dhotre, minister of state for electronics and IT (MeitY), said that "Blockchain Technology as one of the important research areas having application potential in different domains such as Governance, Banking and Finance, Cyber Security and so on."

ScyllaDB Trends – How Users Deploy The Real-Time Big Data Database

ScyllaDB is an open-source distributed NoSQL data store, reimplemented from the popular Apache Cassandra database. Released just four years ago in 2015, Scylla has averaged over 220% year-over-year growth in popularity according to DB-Engines. We’ve heard a lot about this rising database from the DBA community and our users, and decided to become a sponsor for this year’s Scylla Summit to learn more about the deployment trends from its users. In this ScyllaDB Trends Report, we break down ScyllaDB cloud vs. on-premise deployments, most popular cloud providers, SQL and NoSQL databases used with ScyllaDB, most time-consuming management tasks, and why you should use ScyllaDB vs. Cassandra.


  • ScyllaDB vs. Cassandra
  • ScyllaDB Cloud vs. ScyllaDB On-Premises
  • Most Popular Cloud Providers for ScyllaDB
  • Databases Most Commonly Used with ScyllaDB
  • Most Time-Consuming ScyllaDB Management Tasks


ScyllaDB vs. Cassandra – Which Is Better?

Wondering which wide-column store to use for your deployments? While Cassandra is still the most popular, ScyllaDB is gaining fast as the 7th most popular wide column store according to DB-Engines. So what are some of the reasons why users would pick ScyllaDB vs. Cassandra?

ScyllaDB offers significantly lower latency which allows you to process a high volume of data with minimal delay. In fact, according to ScyllaDB’s performance benchmark report, their ...


Read More on Datafloq

How to Teach your Anomaly Detection System to Correlate Abnormal Behavior

Abnormal data trends rarely occur on their own. Influencing or related metrics are usually involved. For example, let’s say that a remote data center goes offline and doesn’t come back up. The anomaly in this case isn’t just a power failure, it’s a power failure plus a failure in a backup generator. 

Some systems might show you one of these anomalies, leaving you to search for other affected metrics which can take hours, days or even weeks. Correlation, on the other hand, instantly lists related anomalies so you can quickly and painlessly understand what the leading dimension is and which metrics are impacted.

If you don’t use anomaly detection, you won’t understand the cause of your outage until a support crew reaches the site. With anomaly detection, however, you can quickly discover both related anomalies, making it that much easier for you to get back online. 

Finding related metrics and anomalies

Behavioral topology learning provides a method for data scientists to understand relationships between millions of metrics at scale. This lets them combine related anomalies into stories, mitigate errors and examine their root causes. When implemented correctly, this system can filter out unrelated metrics from the results for greater accuracy.
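
As a toy illustration of the grouping idea (a sketch under simplifying assumptions, not any vendor’s actual behavioral topology algorithm), anomalies can be bundled into one “story” when their time windows overlap and their underlying metrics move together:

```python
# Illustrative sketch only: treat two anomalies as part of one "story"
# when they overlap in time and their metrics are strongly correlated.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def related(a, b, series, min_corr=0.8):
    """Related = overlapping time windows + correlated metric series."""
    overlap = a["start"] <= b["end"] and b["start"] <= a["end"]
    corr = abs(pearson(series[a["metric"]], series[b["metric"]]))
    return overlap and corr >= min_corr

# Toy data: the power failure and generator failure move together;
# an unrelated marketing metric does not.
series = {
    "dc_power":  [10, 10, 10, 0, 0, 0],
    "generator": [5, 5, 5, 0, 0, 0],
    "signups":   [3, 7, 2, 9, 4, 6],
}
a = {"metric": "dc_power", "start": 3, "end": 5}
b = {"metric": "generator", "start": 3, "end": 5}
c = {"metric": "signups", "start": 0, "end": 1}

print(related(a, b, series))  # True: one story, generator is related
print(related(a, c, series))  # False: unrelated metric filtered out
```

Filtering on both time overlap and correlation is what lets such a system exclude unrelated metrics from the results, as described above.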

There are several methods within ...


Read More on Datafloq

ISRO launches CARTOSAT 3, US satellites into space

The launch will enhance India's ability in high-resolution imaging and also reinforce India as a global launch destination for small satellites using its workhorse rocket Polar Satellite Launch Vehicle.

Survey Download – AI in Drug Development

  • Your Contact Information

Tuesday, 26 November 2019

Brickster Spotlight: Meet Alex

Are Advertisers Using Your Streaming Data Ethically?

Modern consumers face a conundrum. How can they balance the benefits of technology with the risks of their sensitive data falling into the wrong hands?

We are now in the era of information. Data is raising the interest of business leaders – and the concern of citizens.

Every minute, over 70,000 transactions and nearly 4 million Google searches take place. Meanwhile, massive data security debacles – such as the Cambridge Analytica scandal – have left many consumers worried about the safety of their sensitive information.

The Straw That Broke the Camel’s Back

The 2018 Cambridge Analytica scandal shined a revealing and unsettling light on the lax data security practices of major corporations. News of the data breach first emerged through the combined investigative journalism of the New York Times, the Guardian and the London Observer in March of that year.

The investigations revealed that Cambridge Analytica had purchased the private information of Facebook users from a researcher. At the time, the professor claimed he had collected the information for an academic study. However, journalists later revealed that this was anything but the case. Instead, the information was used in an attempt to manipulate the presidential election in the United States.

Astoundingly, only 305,000 people consented to the ...


Read More on Datafloq

Google’s BERT changing the NLP Landscape

We write a lot about open problems in Natural Language Processing. We complain a lot when working on NLP projects. We pick on inaccuracies and blatant errors of different models. But what we need to admit is that NLP has already changed, and new models have solved problems that may still linger in our memory. One such drastic development is the launch of Google’s Bidirectional Encoder Representations from Transformers, or BERT, the model that has been called the best NLP model ever based on its superior performance over a wide variety of tasks.

When Google researchers presented a deep bidirectional Transformer model that addresses 11 NLP tasks and surpassed even human performance in the challenging area of question answering, it was seen as a game-changer in NLP/NLU.




  • BERT comes in two sizes: BERT BASE, comparable to the OpenAI Transformer, and BERT LARGE — the model which is responsible for all the striking results.
  • BERT LARGE is huge, with 24 Transformer blocks, a hidden size of 1,024, and 340M parameters.
  • BERT is pre-trained for 40 epochs over a 3.3-billion-word corpus, including BooksCorpus (800 million words) and English Wikipedia (2.5 billion words).
  • BERT LARGE was trained on 16 Cloud TPUs.
As input, BERT takes a sequence of words which keep flowing up the stack. ...
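As a rough sanity check on the 340M figure, the parameter count can be approximated from the architecture numbers quoted above (24 Transformer blocks, hidden size 1,024, a roughly 30K WordPiece vocabulary). This back-of-the-envelope sketch ignores biases, layer norms, and the pooler, so it lands a little under the official count:

```python
# Back-of-the-envelope parameter count for BERT LARGE, using the
# architecture figures quoted above. Ignores biases and layer norms.

HIDDEN = 1024          # hidden size of BERT LARGE
BLOCKS = 24            # number of Transformer blocks
VOCAB = 30522          # WordPiece vocabulary size
MAX_POSITIONS = 512    # maximum sequence length

# Each block: self-attention (Q, K, V, output projections = 4 * h^2)
# plus the feed-forward network (h -> 4h -> h = 8 * h^2).
per_block = 4 * HIDDEN**2 + 8 * HIDDEN**2

# Embedding tables: token, position, and segment (token-type) embeddings.
embeddings = (VOCAB + MAX_POSITIONS + 2) * HIDDEN

total = BLOCKS * per_block + embeddings
print(f"~{total / 1e6:.0f}M parameters")  # in the ballpark of the quoted 340M
```

The estimate comes out around 334M; the remaining few million parameters sit in the biases, layer-norm weights, and output layers that this sketch deliberately skips.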


Read More on Datafloq

Seat Belts and Safety Restraints for AI Autonomous Cars

By Lance Eliot, the AI Trends Insider

Seat belts.

Some people love them and feel reassured to be wearing one while inside a moving car.

Others hate them and feel trapped, at times even trying to find clever ways to avoid wearing them.

Most of us now know that as a driver, you ought to be wearing your seatbelt, and modern day cars will usually put up quite a holler if you aren’t wearing one.

Grandparents Opinions About Seat Belts

I remember my grandparents telling me when I was young that they normally did not wear a seat belt. They indicated that when Congress passed the National Traffic and Motor Vehicle Safety Act in 1966, paving the way for mandatory seat belts in cars, they nearly went to Washington DC to protest. Indeed, they defiantly refused to wear seat belts at all, and made a proud show of disdain for seat belts every time they got into a car.

They had experienced primarily lap belts for most of their driving life.

I vividly recall them buckling all of the lap belts that were in the car and telling any passengers to sit atop the buckled seat belt. Buckling the seat belts was partially to satisfy the system that later on was able to detect whether a seat belt was fastened or not, and they also did it to overtly carry out their protest. If the seat belts were unbuckled, it could be that you didn’t even realize they were there. For them, buckling the seat belts and purposely sitting on top of them was an indicator that you knew what you were doing and did so for a reason.

The 3-point belt standard got them further riled up.

The early versions were somewhat confusing to use, and this added more fuel to the fire as to whether they were necessary, since they seemed primarily to be there as an annoyance. How do these darned things work, they would exclaim? Stupidly designed, not needed, abhorrent, they would say.

It really galled them when seat belts were added to the backseat of cars.

They somewhat begrudgingly could understand the basis for putting seat belts in the front seats of a car, particularly for the driver, but it was unfathomable to put them into the backseat. Who in the world would want or need to wear seat belts while in the backseat? It became an open secret that they were convinced that the seat belt manufacturers had a monopoly going on and had in a conspiratorial manner cowed the government into keeping them in business by insisting on seat belts throughout a car.

At times, my beloved grandparents would sit in the back seat of my car, once I was older and able to drive, and would explain to me that the reason they did not need to wear a seat belt in the backseat was that they were strong enough to hang-on if anything happened while the car was in motion. This became a test of their personal strength as elders, suggesting that if they were too weak to hang onto an armrest or steady themselves if the car swerved, it somehow implied they were weak of mind, body, and spirit.

In their later years, air bags were just coming to fruition, which was mainly during the 1970’s and the 1980’s.

I almost shouldn’t tell you what they thought of air bags. The good news is that at first they thought airbags were the best thing since sliced bread. In their minds, having air bags meant you for sure did not need to wear your seat belt. This was of course contrary to the clear guidance that you were supposed to wear your seat belt and that the airbag was merely a supplemental form of safety restraint.

Nope, for them, the airbag was the kiss-of-death to the need for seat belts. They assumed that the air bag would save them in any kind of incident. They weren’t sure how the contraptions worked and figured it was a compressed down-filled pillow that somehow expanded and gave your head and body a soft place to land. I suppose you could somewhat excuse them on this misunderstanding, given that the auto makers initially referred to the airbag as an Air Cushion Restraint System (ACRS), which sounds kind of like a pillow, I suppose.

My grandparents weren’t around when air bags began to become not only standard but also you’d have several of them, perhaps six or more, outfitted into a conventional car. I’d dare say that they’d be once again threatening to protest in Washington DC about this proliferation of air bags.

Why, you might ask, since they had initially hailed the invention of airbags?

Airbags First Deployed

You might not know that when airbags were first being deployed, even some of the major automakers opposed them, including Ford and GM, doing so under the claim that airbags were inappropriate and lacked consumer demand.

There were also rumors of air bags that suddenly deployed on their own and either scared the dickens out of the driver or caused the driver to lose control of the car. There were rumors that the air bags would injure you upon deployment and that you were at as much risk, if not more so, of death from the airbag as you would be from the car getting into a collision.

Though some of those rumors were baseless, there is still to this day controversy associated with air bags.

The National Highway Traffic Safety Administration (NHTSA) did a study in the early 2000’s and found that over a twelve-year period there were over 200 deaths due to airbags. The deaths occurred at low speeds, and the NHTSA concluded that the airbag, rather than the underlying incident that triggered its deployment, was the more likely cause of death.

For higher speeds, they could not readily differentiate what led to a death and so tended to not attribute deaths to the air bags when the incident involved high speeds.

Getting back to the topic of seat belts, I’m happy to say that my children are quite inured to the use of seat belts.

They would not even consider driving a car without wearing seat belts and will nearly always urge others in their cars to be wearing their seat belts. Of course, you could say that they’ve grown-up with seat belts and it was the constant drumbeat during their generation that aided them in becoming seat belt advocates. My grandparents had obviously grown-up in a different era and perceived seat belts in a radically different way.

Seat Belts Aspects

One aspect that I never could fully grasp about my grandparents’ concerns was that they insisted the seat belt would hamper the driving of a car.

I don’t know about you, but I’ve never had a situation where I thought my seat belt got in the way of my driving the car. Certainly, I’ve had a number of moments when the belt tension mounted and became uncomfortably taut due to a rapid maneuver. I didn’t find myself, though, unable to maintain control of the driving task. You might suggest it helped me maintain control, since I would otherwise likely have had my body and limbs flail around.

Another qualm they had was that the seat belt could end-up killing you.

Have you heard that one before?

You might be puzzled about killer seat belts.

Let me enlighten you.

My grandparents asserted that you might get into a car accident and not be able to escape from the car. The seat belt would keep you pinned into the wrecked car. They were convinced that the chances of getting stuck inside a burning car far outweighed whatever other protection or safety the seat belt offered. In their risk-reward equation, putting on a seat belt was like a death sentence, meaning that if the car got into an accident you would be a dead person. They, meanwhile, free of the seat belt as a restraint, would walk or crawl out of the burning car and survive.

What amplified their claim were movies and TV shows that would vividly depict a car plunging dramatically into a body of water such as an ocean or a lake, and the person inside the car would be trapped by their seat belt and drown (a surprisingly frequent plot device in murder mysteries and James Bond spy stories).

Living near an ocean here in California seemingly put me at heightened risk, and when they realized that we also have parks with small lakes, well, this was enough to get them to plead with me to not wear my seat belt. I bought one of those seat belt cutting tools, which also doubled as a tool to break the glass of the windshield, hoping this would ease their concerns. It did not.

By the way, I’ve never managed to drive my car into the ocean or into a lake, and nor even into a swimming pool. I might be living on borrowed time. Yes, I still keep the handy escape tool in my car, as per the lessons of life handed down to me by my grandparents.

I trust that you know that the odds of seat belts “killing” you are exceedingly remote. I won’t say the odds are zero, as you maybe assume, but using whatever tiny rate it might be as a basis for overlooking the safety of wearing the seat belt fails an easy calculation. Wear your seat belt.

Seat Belts And The Law

In California, it is the law that you must wear your seat belt.

And it is not just the driver that must wear a seat belt.

The law here is that the driver and all passengers who are over the age of 8, or who are 4 feet 9 inches or taller, must be wearing seat belts. This applies whenever you are driving on public roads, and it also applies on private property, including parking lots and other circumstances. Children younger and smaller than those thresholds must be in federally-approved child car seat restraints.
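The age and height thresholds just described can be expressed as a simple decision rule. This sketch is purely illustrative of the logic (not legal advice), and the function name is invented:

```python
def required_restraint(age_years: int, height_inches: float) -> str:
    """Illustrative sketch of the California thresholds described above:
    children under 8 who are also under 4 ft 9 in (57 inches) need a
    federally-approved child car seat; everyone else needs a seat belt."""
    if age_years < 8 and height_inches < 57:
        return "child car seat"
    return "seat belt"

print(required_restraint(6, 50))   # a small 6-year-old: child car seat
print(required_restraint(10, 55))  # over age 8: seat belt
print(required_restraint(6, 58))   # under 8 but 4 ft 10 in: seat belt
```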

In some states, the police can stop you if you are not wearing a seatbelt, allowing them to share with you the importance of wearing a seatbelt and you can get a ticket for not having worn it. Other states don’t permit the police to stop you for not wearing seat belts, considering it not serious enough to warrant a police stop, though if you are stopped for some other valid reason, the police can then cite you for the lack of wearing seat belts.

I know one couple that was visiting here in California and got a bit upset when they had gently rolled through a stop sign and the police stopped them (they acknowledged this was wrong to do), and subsequently the police noticed that their teenage children in the back seat were not wearing seat belts and cited the driver for each of those offenses.

The driver was a bit steamed and argued that they should not be held responsible for what the teenagers in the backseat were doing.

What was he supposed to have done?

Well, here in California, any passenger under the age of 16 that is not wearing a seat belt will mean that the driver gets the blame (or, more properly stated, has the responsibility, which the driver should have duly exercised and made sure the teens were wearing seat belts, akin to being the captain of a ship).

When I was in college, I had a friend that drove a beat-up old jalopy of a car.

It was in pretty bad shape.

He had seat belts in the car.

These were the most dilapidated seat belts I had ever seen. He said he couldn’t afford to replace them.

As frayed as these seat belts were, I was sure that in an accident they would immediately be torn apart at their loose ends and the belts would not hold in anyone or anything of any substantive weight or size. He was lucky that he never got stopped by the police. If a police officer had seen those seat belts, it would have been tickets galore, since it’s another law here that your seat belts must be in proper working order. He was doubly lucky that he never got into an actual incident severe enough to require the seat belts to work as expected.

In case this discussion about seat belts has not been clear cut, let me point out that seat belts will generally increase your chances of surviving car crashes or other such incidents, at least more so than if you aren’t wearing the seat belts. Of course, the seat belts need to be in proper shape. You also need to wear them the right way. If you goof around and aren’t wearing the seat belts as intended, you are defeating their purpose, plus it can cause you added injury during an incident, such as to your spine or torso.

For the driver of a car, the seat belt can aid them in being able to retain control of the driving task.

Whereas they might be tossed around wildly without seat belts, the intent is that you’ll stay pretty much in place and therefore be able to continue to access the steering wheel and the pedals. This staying in place might allow you to drive your way out of whatever predicament is taking place, or at least perhaps be able to more safely consider other drivable options. It can also reduce the severity of the whipping motions and spare your body damage that it could otherwise sustain during the incident from not wearing a seat belt.

Front seat passengers can also gain advantages by wearing their seat belts.

A front seat passenger that is not wearing a seat belt can inadvertently get tossed into the driver of the car, causing the driver to lose control of the car or perhaps leading to the driver hitting the gas or steering radically when they didn’t intend to do so. The front seat passenger could get launched through the front windshield in a severe impact of the car hitting another car or ramming into something, which would likely not occur if the person was wearing a seat belt.

The backseat seat belts are for many people less essential, since they assume that anyone in the backseat will somehow be magically okay in a car incident. Those people falsely think that whatever happens in the backseat won’t affect the front seat and the driving. Little do they seem to know that a person flying around in a backseat can readily push into the back of a front seat, causing havoc to the driver sitting in the front seat. It is even possible for the person in the backseat to go flying up into the front seat of the car, or possibly get launched through the front windshield.

There is also the aspect that while in the backseat, you likely don’t want to be flailing around loosely, even if you don’t happen to knock into the front seat of the car. Imagine if say two people are sitting in the backseat. There’s a rapid car movement which is going to cause a hard braking or swerving of the car. These two people can be tossed around like rag dolls and hit each other, including causing broken bones or worse. If they had been wearing their seat belts, they might have still bumped somewhat into each other, but the likely injuries are going to be less severe.

Ridesharing Cars And Seat Belts

Some recent studies about the wearing of seat belts have indicated that people often do not wear the backseat seat belts when they get into a ridesharing car. These are people that would normally wear their seat belts in the backseat of a car belonging to someone they knew. But, when they get into a ridesharing car, they often do not put on the backseat seat belts.

The news media seemed puzzled by this matter.

I think it seems quite obvious. When you get into a ridesharing car, you are likely going to take a very short trip, and you are assuming that the chances of a car incident occurring are low, due to the short distance and time involved in the travel. You are also likely used to getting into vehicles such as buses that often do not have seat belts.

There is also the matter of figuring out the seat belts in a car that you are not familiar with. This might seem ridiculous since all seat belts are pretty much the same. All I can say is that I’ve seen people struggle mightily with the seat belts while in the backseat of cars. They mentally might be calculating that the amount of time required to figure out the seat belt and put it on exceeds the amount of time they are going to be in the ridesharing car.

Some might also ascribe a level of proficiency to the ridesharing driver. These passengers might believe that the ridesharing driver is a professional driver and so less likely to get involved in an incident, perhaps more so than would someone that they know as a friend or colleague. I’m not going to put too much stock into that part of the theory. There are some that go the opposite direction on this belief in the sense that they assume that since the ridesharing driver is on the road so much of the time, the odds are they are going to ultimately get into incidents that others that don’t spend as much time on the road will not.

In any case, the recent spate of news coverage has brought to the forefront a number of popular YouTube videos that showcase what can happen when you are in a ridesharing car that gets into an incident. If you want to scare yourself into forever putting on your seat belt while a passenger, go ahead and watch one of those videos. Maybe get others that you know to watch the videos too, since it will heighten their understanding of how a human being can become an unguided missile while sitting in the back of a car.

I don’t want to sound like some kind of seat belt crazed advocate.

I am not suggesting that seat belts are a cure all.

They are a reasoned form of safety that has trade-offs.

Yes, bad things can happen while wearing a seatbelt. Likewise, bad things can happen with air bags. For now, these bad things are generally outweighed by the good things that these safety features provide. It is not an absolute. It is the calculation of risk-reward and for which the seat belt is more of a plus than a negative.

In the case of air bags, there’s not much choice per se that you have about those. If your car is equipped with them, they are either going to deploy or not, and either help you or not. Some cars have plenty of them, some cars have very few or none at all (in the USA, cars sold after 1998 have been required to be equipped with air bags for the front seats of the car). Some cars also allow you to disable the airbags. Overall, the airbags “choice” for a driver or passenger is not particularly a choice as much as it is a given, while the use of seat belts is more so a type of choice, one might argue.

AI Autonomous Cars And Seat Belts

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect to consider is the nature and use of safety restraints such as seat belts and air bags. I often get asked whether or not we’ll still have such constraints in a world of AI self-driving cars.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars.

The topmost level is considered Level 5, and a constrained variant is Level 4.

A true self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of true self-driving cars, the automakers are even removing the gas pedal, the brake pedal, and steering wheel, since those are contraptions used by human drivers. The true self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For semi-autonomous cars, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Here’s the usual steps involved in the AI driving task:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
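The five steps above can be sketched as a processing loop. All names here are hypothetical and purely illustrative; a real self-driving stack is vastly more complex than this toy cycle:

```python
# Illustrative sketch of the five-step AI driving cycle listed above.
# All function and class names are hypothetical, not any automaker's API.

def collect_and_interpret(sensors):
    """Step 1: gather raw readings and turn them into detections."""
    return [s.read() for s in sensors]

def sensor_fusion(detections):
    """Step 2: reconcile the detections into one coherent picture."""
    return {"obstacles": [d for d in detections if d is not None]}

def update_world_model(world, fused):
    """Step 3: fold the fused picture into the virtual world model."""
    world.update(fused)
    return world

def plan_action(world):
    """Step 4: decide what the car should do next."""
    return "brake" if world.get("obstacles") else "cruise"

def issue_controls(action):
    """Step 5: translate the plan into car control commands."""
    if action == "brake":
        return {"throttle": 0.0, "brake": 1.0}
    return {"throttle": 0.3, "brake": 0.0}

class FakeSensor:
    """Stand-in for a camera/radar/LIDAR unit, for demonstration only."""
    def __init__(self, reading):
        self.reading = reading
    def read(self):
        return self.reading

# One pass through the cycle: a pedestrian is detected, so the plan is to brake.
world = {}
detections = collect_and_interpret([FakeSensor("pedestrian"), FakeSensor(None)])
fused = sensor_fusion(detections)
world = update_world_model(world, fused)
command = issue_controls(plan_action(world))
print(command)
```

In a running system this cycle repeats many times per second, with each step feeding the next.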

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of safety restraints, let’s consider what might happen as a result of the advent of AI self-driving cars.

First, let’s consider cars that are semi-autonomous.

Those semi-autonomous cars will need a human licensed driver at the wheel and the human driver must be ready to take over the driving task when needed. In many respects, this is no different than being in a conventional car in the sense that the human driver should be snugly in their driver’s seat and be kept in place via a seat belt, along with having air bags at the ready.

There is a bit of a twist though.

One issue that I’ve repeatedly brought up about semi-autonomous cars is that the human driver is likely to become disjointed from the driving task.

As the automation gets better and better, there is a tendency for a human driver to become increasingly careless and aloof of the driving effort. This is a highly dangerous circumstance since the odds are that the human driver will be needed immediately if the AI opts to handover the driving to the human driver, and yet the human driver might be mentally adrift of the driving situation.

In addition to being mentally adrift, there is a high chance that the human driver will be physically adrift too.

You’ve perhaps seen videos of human drivers reading a book, texting on their smartphones, and otherwise physically having their limbs away from the controls of the car. These human drivers often will also shift their body and be angled towards the front seat passenger or perhaps be turned slightly backwards to look at the passengers in the backseat.

Whereas in a conventional car the preponderance of drivers tend to realize that they need to keep their body and limbs within close proximity of the driving controls, the temptation of the heightened automation will prompt them to be adrift of the car pedals and steering wheel.

Semi-Autonomous Cars And Seat Belt Use

This raises the question of whether or not we’ll potentially see more human drivers that will seek to avoid their seat belts or misuse the seat belt by not remaining in place.

I would anticipate that we’ll see a lot of human drivers that allow themselves to get into such a predicament.

They might upon initial foray into using a semi-autonomous car be closely attentive to remain well-connected with the driving controls, but over time, inexorably, if nothing adverse seems to occur, they will stretch out their physical boundaries.

For seat belts, it means that the seat belt will not be as protective as normally expected.

Is this due to a design issue of the seat belt or is it due to the “misuse” by the human driver?

You could try to place all the blame and responsibility onto the inattentive human driver.

That’s the easy way out.

I’d like to suggest that we instead consider how to use automation and the AI to help on the matter.

There is already a move afoot to use inward-facing cameras to watch the human driver and alert them when they are no longer looking ahead at the roadway. These cameras can also track the position of the driver’s head and eyes, checking that both are facing forward. There are also new devices on steering wheels that detect whether the driver has their hands on the wheel; if the hands are away too long, the steering wheel lights up, or a sound or some other alert warns the driver.

We can use the AI to bring together an array of sensory data about the human driver and use it in a coordinated manner to have the AI discuss with the human driver the need to remain involved in the driving efforts. Rather than merely a series of beeps and lights that go on, it would be handy to leverage the socio-behavioral Natural Language Processing (NLP) capabilities of the on-board AI system to inspire the human driver and keep them engaged in the driving task.
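One way to picture that coordinated use of sensory data is a small fusion routine that escalates from lights and chimes up to a spoken NLP prompt. The signal names and threshold values below are invented for illustration, not taken from any actual system:

```python
def engagement_alert(eyes_off_road_secs: float, hands_off_wheel_secs: float) -> str:
    """Combine inward-camera gaze data with steering-wheel hand detection
    into a single escalating response. Thresholds are illustrative guesses."""
    if eyes_off_road_secs < 2 and hands_off_wheel_secs < 5:
        return "none"  # driver appears engaged
    if eyes_off_road_secs < 5 and hands_off_wheel_secs < 10:
        return "chime and dashboard light"  # mild nudge
    return "spoken NLP reminder to re-engage"  # strongest intervention

print(engagement_alert(1, 2))    # attentive driver, no alert
print(engagement_alert(3, 2))    # glancing away, mild nudge
print(engagement_alert(8, 12))   # adrift, spoken reminder
```

The point of the escalation is exactly what the paragraph above argues: a conversational prompt is a different kind of intervention than a beep, and it can be reserved for when the milder cues have failed.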

For socio-behavioral conversation computing, see my article: https://www.aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

For AI NLP and self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

For more about steering wheels, see my article: https://www.aitrends.com/selfdrivingcars/steering-wheel-gets-self-driving-car-attention-ai/

For my article about the responsibility aspects, see: https://www.aitrends.com/selfdrivingcars/responsibility-and-ai-self-driving-cars/

The seat belt and the physical position of the human driver can also be scanned via the inward facing cameras. This would allow the AI to further determine how far away from the driving controls the human driver is becoming, along with whether the human driver is putting themselves into greater danger by not being in the proper placement for the seat belt to work as intended.

As an aside, this same use of the camera data can be helpful in ensuring that the human driver remains well-positioned relative to the air bags that are nearby to the driver’s position. Many people seem completely unaware that they can be dangerously harmed if they are too close to an airbag when it deploys. The recommended distance is about 10 inches away from the inflation point, which is usually easily found at the location of the airbag cover. This distance could be monitored by the AI system and the human driver notified when they have become too close (and also warned when they are too distant).
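The 10-inch standoff mentioned above could be monitored with a simple check like the one below. The camera-derived distance input and the cutoff for "too distant" are assumptions made for illustration:

```python
def airbag_position_check(chest_to_cover_inches: float) -> str:
    """Check the driver's distance from the airbag cover against the
    roughly 10-inch recommendation noted above. The upper bound used
    for 'too distant' is an illustrative assumption."""
    if chest_to_cover_inches < 10:
        return "warning: too close to airbag"
    if chest_to_cover_inches > 24:
        return "warning: too distant from driving controls"
    return "ok"

print(airbag_position_check(8))   # leaning into the wheel
print(airbag_position_check(15))  # within the comfortable band
print(airbag_position_check(30))  # slid too far back
```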

Smart Seat Belts

There are numerous research efforts underway to create the next-generation of seat belts, which some are referring to as Smart Seat Belts (SSB’s).

These advanced versions of seat belts have embedded sensors.

Those sensors would become another form of data collection for the AI system and allow it to ascertain the placement related to the driver. In this manner, not only would there be the visual data from the inward facing camera, but in addition there would be data coming from the seat belt itself.

It is anticipated that the SSB’s would provide a kind of customization for the wearer of the seat belt.

The seat belt would contain elements that could allow it to stretch and extend, or tighten and become more fitting, depending upon the size and weight of the human driver. Hopefully, this would act as another form of encouragement for the human driver to make sure they are wearing their seat belt and doing so in the appropriate manner.
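A smart seat belt’s embedded sensors could feed the AI alongside the inward-facing camera, letting the two data sources cross-check each other. This sketch is purely hypothetical; the field names and the tension threshold are invented:

```python
def belt_status(camera_sees_belt_worn: bool, belt_tension_newtons: float) -> str:
    """Cross-check inward-camera data against the smart seat belt's own
    tension sensor, as described above. Values are illustrative only."""
    sensor_says_worn = belt_tension_newtons > 1.0  # a slack belt reads near zero
    if camera_sees_belt_worn and sensor_says_worn:
        return "worn properly"
    if camera_sees_belt_worn != sensor_says_worn:
        # One source disagrees: the occupant may be sitting on a buckled
        # belt or wearing it incorrectly, so ask them to check it.
        return "conflicting readings - prompt occupant to check belt"
    return "not worn - issue reminder"

print(belt_status(True, 5.0))   # both sources agree the belt is worn
print(belt_status(True, 0.0))   # camera sees a belt but no tension
print(belt_status(False, 0.0))  # belt clearly not worn
```

The conflicting-readings case is the interesting one: it is exactly the buckled-but-sat-upon trick my grandparents used, which a camera alone or a buckle switch alone would miss.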

For the dangers of Level 3 and human driver inattentiveness, see my article: https://www.aitrends.com/selfdrivingcars/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

For the bifurcation of autonomy, see my article: https://www.aitrends.com/selfdrivingcars/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

For my article about safety aspects, see: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

For the reaction time Human Factors issues, see my article: https://www.aitrends.com/selfdrivingcars/not-fast-enough-human-factors-ai-self-driving-cars-control-transitions/

For semi-autonomous cars, I’ve emphasized herein that the human driver must remain involved and aware of the driving task, and that the use of the seat belt is crucial in that effort.

The AI can help to keep track of the position of the human driver and perhaps by using NLP talk the human driver into being more compliant. I say this because I’d bet that beeps and lights would not be quite as effective as a disembodied AI savvy voice that politely and yet sternly acts as a reminder of the right thing to be doing.

What about the passengers in semi-autonomous cars?

I’ll extend my remarks about the driver to suggest that we can have the AI detect the positions of the passengers too, once again trying to inspire them to properly wear their seat belts. The inward facing camera will readily be able to see the front seat passengers, and it is likely that the camera would also be able to see the backseat passengers, or there might be additional cameras throughout the interior to make sure that the backseat passengers can also be seen.

Perhaps the parent in the driver’s seat will no longer need to be “the tough parent” and have to ask or insist that those teens in the backseat put on their seat belts.

The AI can take on that role.

The parent would either shrug their shoulders and say it’s the AI way or no way, or the parent would hopefully acknowledge the helpfulness of the AI in providing a handy reminder about the importance of wearing seat belts. One does have to have sympathy, since any harried parent can easily neglect to keep in mind the importance of seat belts. I’ve pointed out earlier that this can be a serious omission, as those backseat passengers could become flying projectiles.

True Self-Driving Cars And Seat Belts

Let’s now turn our attention to true self-driving cars.

In a true self-driving car, there is no need to have a human driver present. If one happens to be in the true self-driving car, it is not especially noteworthy since there are unlikely to be any driving controls inside the true self-driving car anyway. As such, all those humans inside the true self-driving car are passengers, regardless of whether they are adults or children, and regardless of any kind of driving prowess they each might have.

Here’s where things get really tricky.

It has been predicted that the interior of AI self-driving cars will be radically different than the interior of today’s conventional cars. One of the main reasons to redo the interior is that there is no longer a need to have the driving controls, which normally take up a chunk of space at the front of the interior. Likewise, there is no need to have a driver’s seat.

The car interior now becomes a freed-up space that can be used for whatever you want it to be used for.

Some automakers are likely going to put in swivel seats that allow passengers to face each other, or not, as they might wish to swivel back-and-forth during a driving journey. There are automakers that are going to be putting recliners into the car, or perhaps even beds. The thinking is that people will start using their cars to take them on longer trips and will want to sleep in their car. Or, maybe during their morning commute to work they might want to take a brief nap.

This is exciting and will utterly change our perception of the use of a car interior.

I’ll bring us all back to earth and point out that whatever you do in this interior space, you still need to have safety restraints.

Sorry about that, I hope this didn’t burst anyone’s bubble.

I’ve seen some concept designs of car interiors that omit entirely any kind of seat belts. I know that a concept design is supposed to look sleek and sexy, but I have a bit of concern about not showing the seat belts. You might say it’s a small omission and not worthy of noting. I guess we’ll disagree on that point. I don’t want people having false expectations that they will now be suddenly rid of seat belts.

Indeed, when I speak at AI industry conferences, it does seem that a lot of people falsely believe that there will be no need for any safety restraints, and certainly not seat belts.

Why is this?

I’ll start with the point that makes me aghast. Some people say there will never be any car accidents once we have all AI self-driving cars.

This is some magical notion that the AI self-driving cars will all carefully coordinate their actions and we’ll never again have any kind of car collisions.

This is some wild kind of dreamworld that these people have bought into.

The first aspect is that there will be an ongoing mix of both AI self-driving cars and human-driven cars for quite a while, maybe forever, and thus there is not going to be this Utopian world of solely AI self-driving cars (as I mentioned earlier herein). You had best face the facts. And, in that case, we are going to have car collisions and impacts, presumably mainly AI self-driving cars and human-driven cars making adverse contact with each other.

Even if we somehow removed all human-driven cars from the roadways, and we had only AI self-driving cars, explain to me what an AI self-driving car is going to do when a dog rushes out into the street from behind a large tree. The physics of the situation are going to be that the AI self-driving car will need to hit its brakes. For those of you that counter-argue that the AI should have detected the dog beforehand, I defy you to offer any means by which all such “surprises” will be eliminated from the world of driving as we know it. A dog hidden behind a tree is not something that can be so readily detected.

This brings up another point about being inside a car. You are not wearing seat belts only because of car accidents. Whenever the driver of a car has to hit the brakes, or maybe take a curve very fast, or perform other such maneuvers, the humans inside the car are going to be tossed back-and-forth. The seat belts are used for that safety purpose too. It’s not just when there is an actual car accident that the seat belts come into play.

You might not give much thought to all that your seat belt does for you in any given driving journey. I would wager that if we put a sensor onto your seat belt and it kept track of how many times it helped restrain you, for just your daily commute to work, we’d likely see that the seat belt is a silent but crucial form of safety for you.

How will safety belts function when you are in a swivel seat of your fancy new AI self-driving car?

Do we need a new kind of seat belt?

Will people be upset that their seat belts restrain them? Right now they might not notice the restraint as much, but when they want to move around in those swivel seats it could become a much more apparent matter.

I’d also guess that people will be tempted to take off their seat belts.

Currently, in a conventional car interior, if you take off your seat belt, there’s not much else you can do anyway, and so why bother to remove it. In the case of the car interior for AI self-driving cars, you might be using the interior to play games or do something else that you’d prefer to be freed-up and not have to remain in your seated position and restrained by the seat belt.

You can also imagine the difficulty now of having air bags inside the self-driving car.

Where will the human be and what is their position?

Will the airbag deploy in a manner that befits the position of the human passenger?

With a conventional car, you are pretty much guaranteed where the humans will be seated. This makes it relatively easy to position the air bags.

In a true self-driving car, the humans will have additional flexibility in terms of where they will be seated, their angle, their pitch, and so on. This becomes a kind of moving target, making it hard to ensure that an airbag will be of help to them when needed (and not a danger either).

If the humans are reclining, we once again need to identify what kind of seat belt can aid them. The same is the case with full-on prone position for sleeping inside a moving car.

I suppose you might right away be saying that trains allow you to sleep on-board and you don’t need to wear a seat belt. Cruise ships let you sleep without a seat belt. Airplanes generally let you sleep without a seat belt or they tell you to go ahead and keep it loosely around you, and also be ready to awaken if prompted and then sit up and make sure your seat belt is properly on you.

Of course, the answer is that a moving car is not the same as an airplane, nor the same as a cruise ship, nor the same as a train.

I hope that’s apparent. Its closest cousin would be a bus. Most overnight buses admittedly let you sleep on-board without any specialized seat belt, though many would say this is a loophole in the rules and endangers people. Perhaps the number of miles driven while sleeping on a bus is small enough that no one wants to hassle with it; plus, the thinking is that a bus is big and not perhaps as prone to coming to sudden stops or getting into accidents.

The vast number of miles that people will travel in their self-driving cars, along with the fact that these are cars, meaning they are relatively smaller than a bus and more likely to have severe consequences for the occupants when something untoward happens, all adds up to our needing specialized seat belts for being inside these self-driving cars. And, we’ll need to encourage passengers to use those seat belts.

It is anticipated that AI self-driving cars will become the mainstay of the ridesharing cars. If that’s the case, should the AI act as an “enforcer” and get those that are riding in a ridesharing car to put on their seat belts and keep those seat belts on?

I mention this because there won’t be a human driver in the ridesharing car that could suggest it to the passengers.

It would seem to be up to the AI to do something about this.

For more about ridesharing and the advent of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/

For my article about taking family trips in AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/family-road-trip-and-ai-self-driving-cars/

For the productivity elements of being inside an AI self-driving car, see my article: https://www.aitrends.com/selfdrivingcars/productivity-gains-or-losses-via-ai-self-driving-cars-the-inside-view/

For the non-stop use of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/non-stop-ai-self-driving-cars-truths-and-consequences/

AI Interaction And Seat Belts

I realize that when I suggest the AI should be informing passengers about their seat belts, it has a kind of creepiness factor to it.

Imagine that you get into a ridesharing car with your friends. You are sitting in swivel seats and having a good time, including drinking, which, assuming you are legally able to drink and since none of you is driving, you can readily do. Go ahead and party in that AI self-driving car. Get lit, as they say. The AI at the start of the driving trip tells you all to put on your seat belts. You all comply.

During the driving journey, and perhaps after getting a bit tipsy, some of you decide to remove your seat belts.

The AI is likely going to be able to detect this. The sensors in the seat belt will inform the AI. The AI will continually scan the inward-facing camera feed and be able to visually detect that the seat belts have been removed. The AI speaks up and tells you all that it is important to keep your seat belts fastened.

Creepy?

I guess so.

If I say it’s for your own protection, does that help?

Anyway, let’s move past the privacy issues that this raises, which I’ve covered elsewhere, and focus again on the AI aspects and the use of seat belts.

For more about privacy aspects, see my article: https://www.aitrends.com/selfdrivingcars/privacy-ai-self-driving-cars/

Suppose that in their drunken state, these passengers refuse to put back on their seat belts.

Since this is a ridesharing car, if the self-driving car makes a sudden stop, doing so lawfully to protect the passengers, and the unbelted passengers then go flying around the interior, who is responsible? Is it the ridesharing service because it did not enforce the use of seat belts? Is it the miscreants that opted to refuse compliance with the AI-urged use of the seat belts?

I am betting that a lawyer will be happy to go after the ridesharing service. All told, I’d guess that we’ll eventually need to decide as a society what we want done about these kinds of situations. The easiest “solution” is that if the AI detects the seat belts are not being used, or being used improperly, and if the passengers won’t comply, the AI would indicate it is bringing the ride to a gradual halt and will pull over to the side of the road.
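The escalation just described — warn, warn again, and eventually bring the ride to a gradual halt — can be sketched as a tiny decision function. Everything here, including the state names and the warning limit, is an assumption for illustration, not any ridesharing service’s actual policy.

```python
# Hypothetical sketch of the escalation policy described above:
# warn on noncompliance, and after repeated refusals bring the
# ride to a gradual halt. The warning limit is an assumption.

MAX_WARNINGS = 2  # assumed threshold before pulling over

def next_action(belts_ok, warnings_issued):
    """Decide the AI's next step given current seat-belt compliance
    and how many warnings have already been issued."""
    if belts_ok:
        return "continue_ride"
    if warnings_issued < MAX_WARNINGS:
        return "issue_warning"
    return "pull_over_gradually"

# Walk through a ride where passengers keep refusing to comply.
warnings = 0
for belts_ok in [True, False, False, False]:
    action = next_action(belts_ok, warnings)
    if action == "issue_warning":
        warnings += 1
    print(action)
# continue_ride, issue_warning, issue_warning, pull_over_gradually
```

The point of keeping the pull-over as a last resort, reached only after a counted number of warnings, is that stopping the car creates its own hazards, as the next paragraph discusses.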

This is not an entirely satisfactory solution, as you can imagine, because if the passengers are drunk and they then get out of the ridesharing car and get injured, having been brought to a halt at the side of the road, who is to blame for that?

I’ll remove the drunkenness from the equation, since I don’t want you to get mired in thinking that’s the only situation involving this kind of dilemma. You put your children into a self-driving car and tell it to take them to school. When you put them into the AI self-driving car, they all dutifully put on their seat belts. Away the AI self-driving car goes.

Before it even gets down the block, the kids all take off their seat belts. They want to roam around inside the AI self-driving car and have some fun. No stinking seat belts for them. The AI detects that the kids have removed their seat belts. It gives them a stern warning. They laugh and mock the AI.

What now?

It could be that the AI tries to dial up the parents and get them onto an interior display screen, showing them via the camera the crazed children playing around inside the AI self-driving car. I’m assuming those kids are going to be in deep trouble that night when they get back home. This “solution” might not be viable either, since the AI might be unable to reach the parents, perhaps due to electronic communications blockages or because the parents are simply unavailable.

You might say that the AI should just immediately turn around and take the rebelling children back to their home. This is not a good solution either because they are now presumably not wearing their seat belts, and for whatever distance the AI needs to drive back home, those kids are all in danger because they are not wearing their seat belts. Plus, it could be that the parent has already left the home and the AI would be taking those kids back to an empty house anyway.
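The dilemma in the last few paragraphs — try the parents first, and have a backup when they can’t be reached — amounts to a preference-ordered chain of remedies. The sketch below is purely illustrative; the remedy names, the inputs, and their ordering are my assumptions, not a proposal anyone has actually fielded.

```python
# Hypothetical sketch: a preference-ordered fallback chain for
# unattended children who refuse the AI's seat-belt warning.
# Remedy names and their ordering are invented for illustration.

def choose_remedy(parent_reachable, near_safe_stop):
    """Pick the first viable remedy from an assumed preference order:
    show the parents what is happening, else pull over somewhere safe,
    else slow down and keep warning while the ride continues."""
    if parent_reachable:
        return "video_call_parent"
    if near_safe_stop:
        return "pull_over_and_wait"
    return "slow_down_and_keep_warning"

print(choose_remedy(parent_reachable=False, near_safe_stop=True))
# pull_over_and_wait
```

Whatever the actual ordering turns out to be, the key design point is that every branch leaves the children in a defined, supervised-by-something state rather than simply continuing the ride as if nothing happened.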

The whole topic about children being inside a self-driving car without any adult supervision is one that we as a society have not yet broached.

There are lots of other issues that can arise and there is a slew of questions yet to be asked and resolved. I realize that you might insist that we don’t let children ride alone in an AI self-driving car, but I’d say that will either be a proposed law that no one will agree to, or a law that many will break. It is going to be very tempting to use the AI self-driving car as a means to transport your children to school, and to football practice, and to the dentist, and so on, doing so on your behalf and without having to have an adult present in the self-driving car.

I’ve predicted that this will likely lead to a new job or role in our society, namely the position of an AI self-driving car ride-along adult who can supervise children in the AI self-driving car. It requires no driving ability nor a license to drive. It is in a sense a nanny-like role. I can envision ridesharing services that will try to differentiate themselves from other ridesharing services by providing these “in-car nanny services” for an added fee when you use their ridesharing self-driving cars.

For my article about the jobs of the future related to AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/future-jobs-and-ai-self-driving-cars/

For my Top 10 predictions about AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019/

For the aspect of Gen Z and the advent of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

For my article about the dangers of kids being in moving AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/shiggy-challenge-and-dangers-of-an-in-motion-ai-self-driving-car/

Conclusion

I remember when I was a child that my parents would sometimes turn around from the front seat of the car and loudly tell me and my siblings that we better stop messing around or we’d be in a lot of trouble. That usually worked, and we settled down. At least for a few minutes.

For AI self-driving cars, the use of seat belts will still be crucial and amount to pretty much the same as today’s conventional cars. There will likely though be human drivers and passengers that might become complacent when in a semi-autonomous car, and falsely believe they can either remove their seat belt or wiggle around it. The AI can likely detect this and act as a kind of seat belt cop.

When we get to the true self-driving cars, the good news is that there is no longer a human driver that needs to be properly seat belted in. The bad news is that the passengers are bound to want to move around and have freedom within the moving car. Wearing a seat belt won’t be the top of their list of things to do while in an AI self-driving car. Plus, with the variations in car interiors, the odds are that having conventional seat belts won’t cut the mustard and we’ll need other approaches to be invented or brought to the marketplace.

The toughest aspect about the true self-driving cars involves having unattended children in the self-driving car. In theory, if an adult is present, you can hold the adult responsible for making sure that everyone in the self-driving car is properly wearing their seat belt at all times. Without an adult, what are we to do? The AI can certainly detect the tomfoolery, but it is not readily going to be able to enforce the seat belt policy per se.

There are lots of catchy sayings that have evolved around wearing seat belts.

Click it or ticket.

Confucius says wear your seat belt.

No safety, know pain.

Seat belts save lives, buckle up every time.

We’ll need to come up with some new slogans for the advent of AI self-driving cars.

AI says wear your seat belt.

No seat belts, AI no go.

The AI says, don’t forget to fasten your seat belt.

Well, I’m sure that someone can come up with something catchier than those potential tags.

The real work is going to be solving the seat belt “problem” and leveraging the AI to aid in saving people by getting them to wear their seat belts.

That’s a worthy catchphrase.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]