Hardware Testing

Summer, 1981. IBM released the original IBM Personal Computer, in direct competition with the four-year-old Apple II. In response, Steve Jobs took out a full-page ad in the Wall Street Journal with the blaring headline: “Welcome, IBM. Seriously.”

Jobs let the Macintosh team buy an IBM PC, which they promptly dismembered and analyzed. They compared their findings to their work on the Macintosh, and concluded that the IBM PC was already obsolete.

Of course, the IBM 5150 wasn’t the only competing product the Mac team got their hands on. After Jobs turned the Mac research project into full-scale product development, the team wasn’t shy about pulling resources and information from Jobs’ former team, the Lisa project.

Routines like QuickDraw, Lisa’s 68000 microprocessor, and even personnel were pirated from the Lisa project. Simply put, the Macintosh team was able to build a better product by learning from both the successes and failures of other products.


When we left Past-Imrich in Part I, I was studiously teaching myself whatever I needed to learn in order to both ask and answer the right questions.

- How can we get the most utility out of our computers?

- Who would be able to design a way to get the most utility out of human-computer interaction?

It wasn’t all book learning, however.

What are the options?

Before I started testing, I surveyed my options to see what was already available.

Smart glasses? The technology simply was not ready. The options were heavy, bulky, and painful for long-term wear. The functionality was limited, as was the configurability. The battery life fell short, and the privacy concerns around camera recording made them socially unacceptable.

Brain-computer interface? The ultimate goal is to think of something and get it. BCI solutions weren’t good enough to capture thoughts, or even simple triggers. Most were based on non-invasive EEG readings, where the resolution is low. Some “BCIs” were really reading electrical signals from the muscles rather than the brain.

Either way, they weren’t usable for all-day wear due to low utility, bulkiness, and battery requirements. This is a profoundly simplified look at BCIs; we’ll write a full blog post on them in the future. Now, did I want to wait another ten to twenty years for that technology to come along? No.

Audio? This was interesting. Recent developments in chips had made all-day wireless headsets possible. The battery could last all day. Devices were getting smaller and smaller. That technology was ready.

Hearing is a high-bandwidth, fast channel for receiving information from computers. Audio therefore seemed to be the best possible route for building a truly seamless all-day interface with today’s tech.

Hardware Testing

I started buying and testing as many products as I could get my hands on to study and learn from: smart speakers, Bluetooth headsets, headphones, wireless earbuds, smartwatches, smartphones. I ruled out hearing aids, since their microphones are poorly placed for catching speech.

I was hoping to use pre-existing hardware and connect it to a smartphone app I would build. That way I could take control of the device, and maybe even change its firmware to communicate with the phone.
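
To make that plan a bit more concrete, here is a minimal sketch of what the phone side could look like, assuming an Android app written in Kotlin (this is an illustration, not code from the actual project). The app asks the system for its Bluetooth headset profile so it can see which headset is connected and start working with it; the function name hookIntoConnectedHeadset is just a placeholder, and on Android 12 and later the app would also need the BLUETOOTH_CONNECT permission.

```kotlin
import android.bluetooth.BluetoothAdapter
import android.bluetooth.BluetoothHeadset
import android.bluetooth.BluetoothProfile
import android.content.Context

// Placeholder sketch: ask Android for the system's headset profile so a
// companion app can see which Bluetooth headset is currently connected.
// (Requires the BLUETOOTH_CONNECT permission on Android 12 and later.)
fun hookIntoConnectedHeadset(context: Context) {
    val adapter = BluetoothAdapter.getDefaultAdapter() ?: return // device has no Bluetooth

    adapter.getProfileProxy(context, object : BluetoothProfile.ServiceListener {
        override fun onServiceConnected(profile: Int, proxy: BluetoothProfile) {
            val headset = proxy as BluetoothHeadset
            // Any headsets currently connected to the phone show up here.
            headset.connectedDevices.forEach { device ->
                println("Connected headset: ${device.name}")
            }
        }

        override fun onServiceDisconnected(profile: Int) {
            // The system took the proxy away; request it again if still needed.
        }
    }, BluetoothProfile.HEADSET)
}
```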

I had an office and a studio in the city center, equipped with a 3D printer and tools. At times, my Dutch flatmates wouldn’t see me for weeks; they thought I had moved out. I spent all my time researching and testing in the studio, only occasionally attending class to finish up my final year at university.

At first, the goal of testing was to find out whether or not it was possible to have an AI assistant on hand at all times.

If you put a smart speaker in every room of your house (as some people do), suddenly you have an audio interface everywhere you are. But this setup has disadvantages.

The interface is no longer private or exclusive to you, since everyone around you can hear it too. And smart speakers are location-bound; it’s the same with car infotainment systems.

I bought a high-quality Bluetooth headset and a bone conduction headset. The first phase of testing was examining use cases as I wore each headset for an entire day. I used each one to listen to music and podcasts, make phone calls, and interact with an AI assistant.

Once I found out what each product could do, I looked for the negatives to see what could or should be improved.

  • Battery: both headsets died after a couple of hours of use.
  • Sound quality: the Bluetooth headset put out monaural sound, noticeably lower quality than stereo.
  • Comfort and fit: it was hard to forget that I was wearing either headset. The bone conduction headset had the advantage of leaving my ears open, so I didn’t have to think about taking it off or putting it back on, but it didn’t fit my head properly. The back loop left a gap between the back of my head and the headset, which I could feel moving or catching on my clothes. The Bluetooth headset sat in only one ear.
  • Customization: all the headphones and headsets I tested had something in common: they were not configurable. Each button could do just one preset thing. I wanted an AI assistant accessible at all times, but the buttons were preset to handle calls or play/pause music. I wanted to change this behavior, experiment, and try new things (a rough sketch of what that could look like on the phone side follows this list).
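
To show what that kind of configurability could look like on the phone side, here is a minimal sketch, again assuming an Android companion app in Kotlin and again purely illustrative rather than code from the actual project: it registers a MediaSessionCompat, intercepts the headset’s play/pause button press, and hands it to a placeholder launchAssistant() function instead of toggling music. In practice a real app needs more setup (playback state, a foreground service) before the system reliably routes button events to it.

```kotlin
import android.content.Context
import android.content.Intent
import android.support.v4.media.session.MediaSessionCompat
import android.view.KeyEvent

// Placeholder sketch: claim the headset's media-button events and send them
// to a custom action instead of the preset play/pause behavior.
fun registerHeadsetButtonHook(context: Context): MediaSessionCompat {
    val session = MediaSessionCompat(context, "HeadsetButtonHook")

    session.setCallback(object : MediaSessionCompat.Callback() {
        override fun onMediaButtonEvent(mediaButtonEvent: Intent): Boolean {
            val key = mediaButtonEvent.getParcelableExtra<KeyEvent>(Intent.EXTRA_KEY_EVENT)
            if (key?.action == KeyEvent.ACTION_UP &&
                key.keyCode == KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE
            ) {
                launchAssistant(context) // placeholder: start a voice-assistant interaction
                return true              // consume the event so it no longer toggles playback
            }
            return super.onMediaButtonEvent(mediaButtonEvent)
        }
    })

    // Android routes headset button presses to the active media session.
    session.isActive = true
    return session
}

fun launchAssistant(context: Context) {
    // Placeholder: e.g. kick off speech recognition or open the companion app.
}
```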

The disadvantage of earbuds or headphones was that I couldn’t forget about them. I had to remember to take them out, put them back in, charge the battery, wait for them to finish charging. Some didn’t fit well and kept falling out.

Furthermore, they isolated me from my environment and forced an exchange of audio realities. There was no option to blend the earbuds’ output with the sounds of my surroundings; I had to choose one or the other.

The one advantage that really stood out was microphone quality: the Bluetooth headset’s mic was much better than the one on the bone conduction headset.

The more phone calls I made, the more I realized just how important a high-quality microphone was.

Conclusion

I was still looking for a hardware solution already on the market that fit human-computer interaction principles and also had all the features and benefits my own tests had pointed to:

  • A battery that could last all day
  • An ergonomic design so comfortable I could forget I was wearing it
  • Audio input: a high-quality microphone
  • Audio output: bone conduction transducers
  • Configurable button functionality

Interfaces like smart speakers or earbuds work great in certain contexts. But keeping the interface available creates switching costs: you have to stay close to the smart speaker, or put earbuds in and take them out. You can’t hear what’s happening around you, or the battery doesn’t last a full day.

Those are compromises to a seamless all-day interface. These form factors cannot create that kind of experience.

So I decided to build my own.


Looking back, I didn’t realize at the time what a journey all my experiments would turn into.

It all started with a goal: to have an AI assistant accessible at all times with the least possible friction, and the same for calls, listening to content, and taking audio notes. It was exciting to work on a challenge I cared about while (possibly) helping to make sure developments in human-computer interaction yield the most utility for us.

We’ll end Part II here, and pick up in Part III with my adventures in hardware design.


Did you miss Part I? Read it now.


Interested in the final result of all this research and testing? Learn more about Sentien Audio.


This post was a collaboration between Imrich Valach and Liz Windsor.