Our Team

Drew Di Donna - CEO
Drew Di Donna is a neuroscience graduate student at Georgetown University. His research interests include the intersection of consciousness and connectomics, neuromorphic computing, brain-computer interfaces (BCIs), and adversarial testing of large language models. He previously studied neuroscience at George Mason University and Trinity College, Oxford, and conducted research in two labs focused on neuroinformatics and electrical engineering. He also brings eight years of combined SWE/IT experience, including collaborations with researchers at Oak Ridge National Laboratory, Intel, and the OpenAI Forum.
Brett Haas - CTO
Brett Haas is a fourth-year computer science student at the University of Virginia. His academic focus is complemented by hands-on industry experience, most recently contributing to Generative AI development at Scale AI. He is particularly interested in the practical application of large language models, scalable software design, and the development of robust, high-performance systems.
Building the First Dog-to-Human Translator

Patent-pending "Bark to Speech" technology that combines cutting-edge neuroimaging data with recent advances in large language models (LLMs)
help me speak human!

From Synapse to Startup: Doggie Talkie Researchers Have Worked with World-Renowned Institutions
“Neither man's closest relative, the chimpanzee, nor dog's closest living relative, the wolf, can use human communication as flexibly as the domestic dog (Kaminski & Nitzschner).”


How It Works
Doggie Talkie places a cap of electrodes (EEG) on a dog's head and translates the measured brainwave voltages into time-series data. This data is then used in conjunction with LLMs, location data, and owner-guided training to predict a word and play it through a speaker, emulating spoken language.
Because LLMs can already be prompted to "act" as a dog and respond to human commands, accurate prediction becomes achievable by limiting the total vocabulary to roughly 50-100 words and supplementing the prediction model with the EEG data, location data, and owner-trained calibrations described above for context.
In essence, the Doggie Talkie device acts as a coprocessor for your dog's brain, serving as a simulated language production area (Broca's area).
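As a rough sketch of that prediction step (not the actual Doggie Talkie implementation), the Python example below combines EEG-derived features, a location-based prior, and a restricted vocabulary to pick a single word. Every name, feature choice, and number in it is a hypothetical simplification.

```python
# Hypothetical sketch of the word-prediction step: EEG-derived features,
# a restricted vocabulary, and location context are combined to pick one
# word to play through the speaker. Names and numbers are illustrative only.
import math
import random

VOCABULARY = ["walk", "food", "water", "play", "outside", "ouch", "happy"]

# Context priors keyed by location (e.g., from a GPS/home-zone lookup);
# values are rough probabilities that a given word is relevant there.
LOCATION_PRIORS = {
    "kitchen": {"food": 0.4, "water": 0.3},
    "front_door": {"walk": 0.5, "outside": 0.3},
}

def eeg_features(voltages, window=32):
    """Collapse a raw EEG voltage time series into simple per-window
    averages, a stand-in for real band-power or learned features."""
    return [
        sum(voltages[i:i + window]) / window
        for i in range(0, len(voltages) - window + 1, window)
    ]

def score_word(features, centroid, prior):
    """Score one vocabulary word: negative distance to its calibrated
    feature centroid, nudged by the location prior."""
    dist = math.sqrt(sum((f - c) ** 2 for f, c in zip(features, centroid)))
    return -dist + math.log(prior + 1e-6)

def predict_word(voltages, location, calibration):
    """Pick the most likely word given EEG data, location, and the
    per-dog calibration produced during owner-guided training."""
    features = eeg_features(voltages)
    priors = LOCATION_PRIORS.get(location, {})
    scores = {
        word: score_word(features, calibration[word], priors.get(word, 0.1))
        for word in VOCABULARY
    }
    return max(scores, key=scores.get)

if __name__ == "__main__":
    random.seed(0)
    # Fake calibration centroids and a fake EEG recording, sized to match
    # the 8 windows eeg_features() produces for 256 samples.
    calibration = {w: [random.gauss(0, 1) for _ in range(8)] for w in VOCABULARY}
    recording = [random.gauss(0, 1) for _ in range(256)]
    print(predict_word(recording, "front_door", calibration))
```

In a real system the nearest-centroid scoring would presumably give way to a trained decoder and an LLM-based re-ranking step, but the overall flow of EEG features plus location context over a small vocabulary mirrors the description above.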