


Our mission

When humans perform tasks and solve problems, they rely heavily on common-sense knowledge about the world. A detailed understanding of the physical world, however, is still largely missing from current applications in artificial intelligence and robotics. Our mission is to change that. We are developing new, ground-breaking technology that allows machines to perceive the world the way humans do.

How it works

Our technology analyzes human actions and extracts real-time information from video streams. We build deep learning systems using our large video datasets of everyday situations and then fine-tune them to specific use cases with minimal effort.
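As a rough illustration (not our production code), extracting real-time information from a video stream typically means buffering incoming frames into fixed-length, overlapping clips that a clip-based network can consume. The clip length and stride below are illustrative values:

```python
from collections import deque

class ClipSampler:
    """Buffer incoming frames and emit fixed-length clips with a stride,
    the usual input format for clip-based video networks."""

    def __init__(self, clip_len=16, stride=8):
        self.clip_len = clip_len
        self.stride = stride
        self.buffer = deque(maxlen=clip_len)
        self.since_last = 0

    def push(self, frame):
        """Add one frame; return a clip (list of frames) when one is due,
        otherwise None."""
        self.buffer.append(frame)
        self.since_last += 1
        if len(self.buffer) == self.clip_len and self.since_last >= self.stride:
            self.since_last = 0
            return list(self.buffer)
        return None

# Feed 32 dummy "frames" (just integers here): with clip_len=16 and
# stride=8 we get overlapping clips ending at frames 16, 24 and 32.
sampler = ClipSampler(clip_len=16, stride=8)
clips = [c for c in (sampler.push(i) for i in range(1, 33)) if c is not None]
```

Each clip would then be handed to the classifier, so predictions refresh every `stride` frames rather than once per full clip.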

A novel database

One of the limiting factors for advancing video understanding is the lack of large, diverse real-world video datasets. To circumvent this bottleneck, we have built a scalable crowd-acting™ platform and have created some of the industry's largest video datasets for training deep neural networks.

A unique approach

Our deep neural networks are pre-trained on our datasets of crowd-acted videos. The datasets contain short video clips showing a wide range of physical and human actions. We then transfer the capabilities of the trained networks to specific video applications.
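A minimal sketch of that transfer step, with a toy stand-in for the pretrained network (all functions and numbers below are illustrative, not our actual models): the pretrained backbone is frozen, and only a small task-specific head is trained on the target labels.

```python
import math

def frozen_backbone(x):
    # Stand-in for a pretrained video network: maps raw input to
    # feature values. In practice this would be a deep network whose
    # weights are kept fixed during transfer.
    return [x[0] + x[1], x[0] * x[1]]

def train_head(data, epochs=500, lr=0.1):
    # Train only the new task head (logistic regression) on top of
    # the frozen features; the backbone itself is never updated.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = frozen_backbone(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            g = p - y                       # gradient of the log loss
            w[0] -= lr * g * f[0]
            w[1] -= lr * g * f[1]
            b -= lr * g
    return w, b

def predict(x, w, b):
    f = frozen_backbone(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Tiny toy task: the label is 1 when the two inputs sum to more than 1.
data = [([0.9, 0.5], 1), ([0.1, 0.2], 0), ([0.8, 0.9], 1), ([0.2, 0.1], 0)]
w, b = train_head(data)
```

Because only the small head is optimized, adapting to a new use case needs far less labeled data and compute than training a network from scratch, which is what makes the "minimal effort" fine-tuning above possible.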

Use Cases

Our machine learning systems excel at deciphering complex human behavior in video.

Gesture Recognition
Automatic detection of dynamic hand gestures for human-computer interaction
Personal Home Robots
Visual scene understanding for domestic robots interacting with humans
Elderly Fall Detection
Automatic detection of accidental falls among elderly people at home
In-Car Monitoring
Automatic detection of driver and passenger behaviors
Our Data Factory

Through crowd-acting™, we are constantly growing our large-scale datasets that help machines to see and understand the world.


Our Core Datasets

Tailored to the needs of product groups and industrial R&D labs

Our core dataset is used to teach machines common sense and basic physical concepts
The world's largest video-based dataset for reading dynamic hand gestures
Human Actions
Our scene understanding dataset is used to detect human behavior and complex actions in context

About us

We are a technical team that is redefining how machines understand our world.

Roland Memisevic, PhD
CEO, Chief Scientist & Co-Founder
Dr. Christian Thurau
Chief Solutions Architect & Co-Founder
Dr. Ingo Bax
CTO & Co-Founder
Moritz Müller-Freitag
Raghav Goyal
A.I. Engineer
Joanna Materzynska
A.I. Engineer
Héctor Marroquin
Crowdsourcing Supporter
Waseem Gharbieh
A.I. Researcher
Guillaume Berger
Principal A.I. Researcher
Till Breuer
A.I. Engineer
Florian Letsch
A.I. Engineer
Nahua Kang
Product Marketing Manager
Sarah Rose
Operations Manager
Erick Dennis
Senior Software Engineer (Consultant)
Dr. Nicolas Gorges
Senior A.I. Engineer
Tippi Puar
Operations Manager
Robert Groth
Head of Sales and Business Development U.S. (Consultant)
Oleg Mikhaylov
Senior Infrastructure Engineer
Dennis Schön
Senior Software Engineer (Consultant)
Mark Todorovich
VP Embedded Systems
Cornelius Styp von Rekowski
Junior A.I. Engineer


Advisors

Nathan Benaich
Peter N. Yianilos
CEO, Edgestream Partners
Ingo Fründ
David J. Fleet

Contact Us

Send us a message

Contact information


Twenty Billion Neurons GmbH
Stralauer Allee 2
10245 Berlin

+49 30 5564 3880


Twenty Billion Neurons Inc.
310 Spadina Avenue
Suite 301
Toronto, Ontario, M5T 2E7

+1 647 256 3554