Saving lives with deep learning, creating smarter chatbots and more: 10 takeaways from ML Conference 2019

December 20, 2019

ML Conference 2019 took place from December 9-11, 2019, in Berlin. After a thought-provoking opening keynote on robot ethics by Dr. Janina Loh, attendees could choose from talks on a wide range of machine learning topics, given by dozens of experts from all over the world.

Read our key takeaways if you couldn’t make it to all the talks you wanted to see.

Takeaway #1: The moral of the story

What do robots have to do with ethics? With this question, Dr. Janina Loh (University of Vienna) opened Machine Learning Conference 2019 in Berlin. In her keynote, she made the technically oriented audience, consisting mainly of machine learning developers and data scientists, aware of the moral consequences of their actions. Human actions, after all, are always normative, and so are the products that arise from them: technologies equipped with artificial intelligence, such as robots, for example.

The term “robot” was coined in 1921 by the Czech writers Karel and Josef Čapek in the play “Rossum’s Universal Robots”. Several questions of an ethical nature can already be derived from its original meaning, the Czech word “robota” (“forced labour”):

  • Are robots technical slaves?
  • What kind of human work should robots do?
  • Who decides what kind of work robots should do?
  • What is the general value of work for us humans?
  • What does a society look like in which DDD work (dull, dangerous, dirty) is performed entirely by machines?

Robots have long since become more than just artificially created workers. Today, numerous other domains are already populated by robots:

  • Military
  • Healthcare
  • Automotive
  • Housekeeping
  • Entertainment industry
  • etc.

Each of these domains certainly raises its own specific ethical issues. But all of them share the question of how ethical systems can or should be embedded in robots. The classic top-down approach is the fixed implementation of certain rules according to which robot systems act; Asimov’s Three Laws of Robotics are an example.

On the other hand, there is the bottom-up approach. Here, moral behavior is learned. Trial and error, training algorithms and reinforcement learning are the keywords. In practice, hybrid approaches are often used, wherein given rules are combined with learning processes.

Learning does not have to be completely autonomous, though. Human “robot educators” can accelerate or improve the learning process: robots learn by imitating their human teachers (imitation learning, IL).

After these observations, Dr. Janina Loh concluded that we bear responsibility in dealing with technology on at least five levels:

  • On the level of personal, individual (moral) actions
  • In ethics and computer education in schools
  • In universities and training institutions of the technical sciences
  • In the form of obligatory training courses for enterprises and firms
  • In the form of ethics committees and similar institutional bodies

In an accompanying interview, we went into robot ethics with Dr. Loh in more detail.

Takeaway #2: It takes many people to make a chatbot smart

Why should we build chatbots? The answer is simple, according to Hans-Peter Kuessner and Jens Polster (adesso AG): they save money. The problem, though, is that many chatbots are poorly trained and therefore “stupid”.

Either the training data or the training itself is to blame: there is no continuous training, or no clear scope. On top of that, many bots don’t offer any user benefit. These problematic chatbots frustrate users because they don’t give the expected answers.

So, what is a better way to create chatbots? As the speakers laid out in detail, chatbot development is a joint effort and takes a lot of planning. The extended chatbot team should consist of data scientists, writers and chatbot mentors. Data scientists cover the analytics, testing procedures, etc., whereas writers create the dialog and have UX in mind.

And then, even when you think you are finished, you will have to retrain the bot according to user input. Chatbots need to be continuously monitored and improved.

Takeaway #3: Voice Revolution – here to stay

MLCon took place in tandem with VoiceCon. Visitors were able to attend the numerous sessions that dealt with the next big revolution: speech recognition and controlling technology through voice.

Francisco Rivas (Navteca), for example, showed how an Alexa skill can be programmed in Python. Utterances, intents and endpoints can be created in the Alexa Developer Console in no time at all. Lambda functions with permissions and access rights to required resources such as logs or databases are then written in Python.
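
To make this concrete, here is a minimal sketch of such a skill handler, assuming the ASK SDK for Python (ask-sdk-core); the intent name is a hypothetical placeholder, not one from the talk:

```python
# Minimal Alexa skill sketch using the ASK SDK for Python (ask-sdk-core).
# "GreetingIntent" is a hypothetical intent defined in the Alexa Developer Console.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


class GreetingIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        # Route requests whose resolved intent matches the console definition.
        return is_intent_name("GreetingIntent")(handler_input)

    def handle(self, handler_input):
        speech = "Hello from your Python-powered skill!"
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(GreetingIntentHandler())

# Entry point configured for the AWS Lambda function.
lambda_handler = sb.lambda_handler()
```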

The real challenge, however, is to find a meaningful use case and implement it as intuitively as possible, so that it actually offers the user added value. The crux of the matter lies in the user experience, as Francisco Rivas emphasizes:

One of the challenges when using voice cloud services is how to recover from and handle, with as little friction as possible, the frustration of a user who has said something correctly while the Skill received something else. Another is finding ways of helping Alexa to understand certain vocabulary better, for example niche terms or jargon.

Voice is here to stay – all the experts agreed on that. As VoiceCon curator Ralf Eggert (Travello) put it:

We are still at the very beginning of this development. Some people would like to put the topic on a par with former “overhyped flops”, such as Second Life, but I am convinced that it is here to stay. A linguistic interface is simply the most natural way to tell a computer what to do.

Takeaway #4: Human intelligence has its unique qualities

Almost all work on machine intelligence focuses on the “how”. But as Srividya Rajamani (Siemens Healthineers) stated in her talk on “building emotionally intelligent machines”, this is not the main question we should be asking.

The hallmark of human intelligence, as she continued, is the ability to ask “what” and “why.” Humans are able to recognize, understand and control their emotions as well as others’ emotions. The challenge is therefore to equip machines with the abilities to detect emotions, display emotions and interpret emotions.

Will emotionally intelligent machines be able to match human excellence? As far as Srividya is concerned, this won’t happen anytime soon. Humans will continue to excel in negotiation, conflict management, teamwork and networking, as well as understanding and motivating human beings.

The question is not whether we can teach an intelligent machine emotions. The question is whether we can make a machine intelligent WITHOUT teaching it emotions.

Takeaway #5: Reinforcement Learning – beware of too much intelligence!

Dr. Christian Hidber (bSquare AG) and Oliver Zeigermann (embarc) gave shape to the topic of reinforcement learning in their workshop. Using TensorFlow Agents, they showed hands-on how real questions from day-to-day life can be translated into reinforcement learning tasks. Which stumbling blocks should be avoided?

Christian reports from his professional experience:

A large temptation is always to put a lot of cleverness into the reward function. The reward function is responsible for defining which outcome is considered “good” and which “bad”. The algorithms are incredibly smart at finding short-cuts and loopholes, producing high rewards for behaviours which are definitely “bad”. It seems that the more cleverness you put into the reward function, the more surprises you get out of it.
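
A minimal, hypothetical illustration of this pitfall (the docking scenario and numbers are invented, not from the workshop): a shaped reward that pays out for merely being near the target invites the agent to loiter there instead of finishing the task.

```python
# Hypothetical reward functions for a docking task, illustrating the pitfall.

def shaped_reward(distance_to_target: float, docked: bool) -> float:
    """A 'clever' reward: bonus for docking plus a per-step proximity bonus."""
    reward = 100.0 if docked else 0.0
    reward += 1.0 / (1.0 + distance_to_target)  # shaping term
    # Loophole: circling near the target forever farms the shaping bonus
    # without ever docking.
    return reward

def plain_reward(docked: bool) -> float:
    """A plainer reward: only the outcome counts, plus a small step penalty."""
    return 100.0 if docked else -1.0  # step penalty pushes the agent to finish
```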

Takeaway: Sometimes too much intelligence can even be harmful!

Takeaway #6: TensorFlow – low level or high level?

Google’s TensorFlow project crops up everywhere in the context of machine learning. Since TensorFlow 2, a low-level API has been in place that allows you to build neural networks step by step. Oliver Zeigermann (embarc) demonstrated exactly this in his session “Understanding how Neural Networks work”, and in doing so made visible how much abstraction power the high-level Keras API provides.
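
The contrast is easy to show. The sketch below fits the same toy linear model twice, once with the low-level API (manual forward pass and gradient step via tf.GradientTape) and once with Keras; it is an illustrative example, not code from the session.

```python
import tensorflow as tf

x = tf.constant([[1.0], [2.0], [3.0]])
y = tf.constant([[2.0], [4.0], [6.0]])

# Low-level API: forward pass, loss and gradient step written out by hand.
w = tf.Variable(tf.random.normal([1]))
b = tf.Variable(tf.zeros([1]))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(100):
    with tf.GradientTape() as tape:
        y_pred = x * w + b                       # manual forward pass
        loss = tf.reduce_mean((y - y_pred) ** 2)
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))

# High-level Keras API: the same model in three lines.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=100, verbose=0)
```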

In the interview, we clarified which API is useful for which purposes, what role Python, JavaScript and Java play in the ML context, and how TensorFlow projects can be brought into production.

Takeaway #7: Try out deep learning for time series analysis

As part of the “Advanced ML Development” track, Oleksandr Honchar (Neurons Lab / Mawi Solutions) explored time series analysis with deep learning.


Time series data is still mainly processed with standard mathematical and algorithmic routines, but neural networks offer benefits such as anomaly detection. They are also capable of generating synthetic time series data that can, as Oleksandr emphasized, be more realistic than data produced by statistical models.
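
As a rough illustration of the anomaly-detection idea (a toy example, not Oleksandr's setup): train a small autoencoder on windows of “normal” series and flag windows it reconstructs poorly.

```python
import numpy as np
import tensorflow as tf

# Toy data: sliding windows over a sine wave stand in for "normal" time series.
t = np.arange(0, 100, 0.1)
series = np.sin(t)
window = 30
X = np.stack([series[i:i + window] for i in range(len(series) - window)])

# A small autoencoder learns to reconstruct normal windows.
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(window,)),
    tf.keras.layers.Dense(window),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, verbose=0)

# Windows with high reconstruction error are flagged as anomalies.
errors = np.mean((autoencoder.predict(X, verbose=0) - X) ** 2, axis=1)
threshold = errors.mean() + 3 * errors.std()
print("anomalous windows:", np.where(errors > threshold)[0])
```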

Further capabilities of neural networks include matching and similarity analysis, as illustrated by examples of EEG patterns and biometric identification.

Takeaway #8: Deep learning saves lives

The second keynote of the first main conference day was given by Dr. Yonit Hoffman. She showed the audience how data can help predict and prevent ship accidents. Why ships? One out of ten ships has an accident every year, often with fatal results.

Possible risks for ships can be predicted via anomaly detection: spotting drifting, for example, or even a ship that has illegally deviated from its course. Additionally, modeling ship behavior in bad weather is essential for preventing potentially fatal accidents, as routes must be adapted to avoid storms.

Temporal convolutional networks turned out to be well suited for the kind of time series data available for ships, and SHAP values (SHapley Additive exPlanations) were used to explain the models.

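For a flavor of how SHAP values explain a model (a toy sketch with invented ship features; the keynote's actual feature set and model were not published):

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Invented features: speed, heading change, wave height.
X = rng.normal(size=(500, 3))
y = (X[:, 1] + X[:, 2] > 1).astype(int)  # toy "accident" label

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row attributes one prediction to the individual input features.
print(shap_values[0])
```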

Takeaway #9: Automatic image cropping can benefit online sales

Are you tired of cropping images of items you are putting up for sale? Deep learning may be able to help. Alexey Grigorev (OLX Group) explained how his company utilized deep learning for this task, starting out from a student’s master’s thesis. The plan was to create a system for saliency detection that crops out unnecessary parts of the image.
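
OLX's deep saliency model was not published, but the general idea can be sketched with OpenCV's built-in spectral-residual saliency (assuming opencv-contrib-python is installed; the file name is a placeholder):

```python
import cv2
import numpy as np

image = cv2.imread("listing_photo.jpg")  # hypothetical input image

# Compute a saliency map (values in [0, 1]) via spectral residual saliency.
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, saliency_map = saliency.computeSaliency(image)

# Threshold the map and crop to the bounding box of the salient region.
mask = (saliency_map * 255).astype("uint8")
_, mask = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
ys, xs = np.where(mask > 0)
cropped = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
cv2.imwrite("listing_photo_cropped.jpg", cropped)
```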

It turned out that the deep learning-based system worked especially well for some categories, such as animals, cars and shoes (90% and above), as well as electronics, cellphones and clothing (80% and above). Real estate, on the other hand, was detected at a much lower rate (50% or worse).

But where’s the potential benefit in automatic cropping? Car listings were deemed too important to the business to experiment on, while cropped animal images showed no clear business opportunity. And so the decision was made to apply automatic image cropping to fashion items.

Takeaway #10: Tool-tip BERT

Have you heard of BERT? It’s not (only) the grumpy puppet from Sesame Street, but also a natural language processing model developed by Google. Christoph Henkelmann (DIVISIO) explored BERT’s capabilities in his session. His conclusion:

“BERT is a system that can be tuned to do practically all tasks in NLP. It’s very versatile but also really powerful.”
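
“Tuning” a pre-trained BERT for a concrete task typically means adding a small task head and fine-tuning it. Here is a minimal sketch using the Hugging Face transformers library (an assumption on our part; the session did not prescribe a toolkit):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# A fresh, randomly initialized classification head is added on top of BERT.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

inputs = tokenizer("BERT is surprisingly versatile.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# The head is untrained here; fine-tune on labeled task data before real use.
print(logits.softmax(dim=-1))
```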

After his session, we caught up with him for an interview, in which we also asked for his opinion on OpenAI’s GPT-2. What are current AI algorithms capable of, and what can’t they do (yet)? What opportunities do they offer, and what dangers might they pose?

Source: JAXenter