Part One: Ideation and Research

In 1979, Jef Raskin and Steve Jobs were butting heads over Raskin’s proposed research project: a computer for the masses that could be manufactured in large quantities and sold at an affordable price. In spite of Jobs’ constant pushback, the board approved the project in September.

Five years, many feature trade-offs, several missed deadlines, one drastic change in leadership, and $15 million worth of advertising later, that research project became the first commercially successful personal computer with a GUI, mouse, and built-in screen: the Macintosh.

You can read volumes about Jef Raskin’s team and how the Macintosh finally made it to production. Those books were written after the fact, through the lens of long-term success.

But what does it look like when a research project turns into a marketable product that isn’t backed by a Silicon Valley stamp of approval, or the kind of resources that Apple had as a modest $300 million corporation?

This series of blog posts will walk you step by step through the events that turned a research project into a company launching its first product this year. But it’s not a success story.

It's not a story of failure either. Sentien's story is progressing full steam ahead. We’re sharing it now because people don’t quite know what to make of us. Are we a drop shipping business from Asia, or a new brand from an established company?

We are neither. We care. Our goal is to improve human-computer interaction: to make the first all-day interface that works for you so seamlessly that you won’t have to think about it at all. Did we succeed? Let's find out.

In the beginning...

First off, let me introduce myself. I'm Imrich, CEO and co-founder of Sentien.

In 2016, I was in my last year at Rotterdam School of Management. After a year of working full time and studying full time (no, it does not add up), I was thinking about what to do next. While sketching out the biggest challenges we face now and will face in the future, I zeroed in on one.

Getting the most utility out of computers.

I divided the typical process of using a computer into three parts:

  1. What computers can do,
  2. What we can imagine they could do,
  3. Communication between us and computers.

The first was too complex to tackle at the time, and the second comes naturally to us, so I saw the most potential in the third area. The way we interact with technology, we are already “cyborgs”; however, communication between us and tech takes too long and is often more distracting than useful.

The ideal interaction would be to think about something and it's done, but we aren’t there yet. I decided to figure out how close we can get to that truly seamless interaction with today’s tech.

The main question was: who would be able to create such a solution? Someone familiar with computers, moral philosophy, hardware, design principles, history, and human-computer interaction. Great minds had already pioneered the way in the field of human-computer interaction.

“The basic personal start-up mechanism for research has to be curiosity. I find myself curious about how something works, or I observe something strange and begin to explore it.” --Ivan Sutherland

I took Harvard’s CS50 course to get a basic overview of computer science, and got a job in a design/art studio to learn about the design process and manufacturing. At the same time, I bought books and took courses on design, electronics, ergonomics and anatomy, human-computer interaction, and manufacturing.

I bought myself a few Arduino starter kits and asked a bunch of silly questions to learn how they work.

Early electronic test prototypes, made from off-the-shelf components such as Arduino boards

I also enrolled in a sci-fi media course to understand how we envision technology and what can go wrong (and right) in our imaginations. (Here are all the movies and books we analyzed in the class.) YCombinator Startup School was another great resource on product and users.

And I made a list of human-computer interaction researchers and closely studied their works.

Human-Computer Interaction

The plan to augment humans with technology has been rolling around in the minds of scientists and thinkers for decades. Since day one, I’ve been aware that we’re standing on the shoulders of giants, and have tried to avoid reinventing the wheel.

J. C. R. Licklider: Credited as a pioneer of AI and modern computing, Licklider envisioned a future where men and computers would live in symbiosis with each other, as opposed to men being replaced by computers.

His paper “Man-Computer Symbiosis,” published in 1960, details that vision. After examining how he spent his own working days, Licklider advocated for the use of computers to, in essence, “augment human intellect by freeing it from mundane tasks.”

In his book “Libraries of the Future” (which was the final report of a two-year research project), Licklider recognized the usefulness of computers for conveying knowledge.

Mark Weiser: Weiser was a computer scientist who coined the term “ubiquitous computing” in the late 1980s.

“Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by lots of people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives.” --Mark Weiser

You can listen to his 1996 talk “Computer Science Challenges for the Next Ten Years,” given while he was CTO of Xerox PARC. In it, he outlines this set of principles for ubiquitous computing, where computing is available anytime, anywhere.

  • The purpose of a computer is to help you do something else.
  • The best computer is a quiet, invisible servant.
  • The more you can do by intuition the smarter you are; the computer should extend your unconscious.
  • Technology should create calm.

“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” --Mark Weiser

In his article “The Computer for the 21st Century,” published in 1991, Weiser described issues that would arise in the then-unknown field of mobile computing. One summary from a former Berkeley professor dismissed the article as “more fiction than fact...it is merely a story.” You might come to a different conclusion when you read it.

The last resource from Mark Weiser is an article written with John Seely Brown called “The Coming Age of Calm Technology.” As they wrote in the intro, “Ubiquitous computing will require a new approach to fitting technology to our lives, an approach we call ‘calm technology.’”

Interacting with calm technology should happen at the outer edge of our attention, instead of taking up our full attention.

The Psychology of Human-Computer Interaction: Published in 1983 by Stuart K. Card, Thomas P. Moran, and Allen Newell, this book shows how hardware and software can be designed with a more human-centric approach. Psychological principles can be applied to design, as every UX designer knows.

Douglas C. Engelbart: Computers can be used to solve the complex problems of the world. And Engelbart was at the forefront of developing early versions of tools that we find so common today.

“I really haven’t warmed up to this thing yet…” Engelbart chuckles as he maneuvers a list of words on a screen. It’s his shopping list, and he’s shuffling items and labeling parts of the list like “Produce.” It’s the Mother of All Demos, and Engelbart is demoing the first computer mouse, in 1968.

Photo of Douglas Engelbart holding the original computer mouse. Image source: NMAH Catalog #2015.3073.10, gift of SRI International

He had begun his presentation by asking a question: “If in your office, you, as an intellectual worker, were supplied with a computer display backed up by a computer that was alive for you all day, and was instantly responsive to every action you have—how much value could you derive from that?”

He had prepared interactive tools to help people collaborate via computing at a time when programmers used punch cards to input information on the relatively few computers available. It was 1968, and he demoed video conferencing, on-screen text, hypertext, file editing, and copy/paste: just a fraction of what he foreshadowed.

Douglas Engelbart using the NLS’s 5-button chord keyset, a standard QWERTY keyboard, and 3-button mouse, around 1968. Image source: NMAH Catalog #2015.3073.11, gift of SRI International

As Alan Kay said, “I don’t know what Silicon Valley will do when it runs out of Doug’s ideas.”

Ivan Sutherland: Sutherland and David C. Evans are known as the fathers of computer graphics. Sutherland is widely known for inventing Sketchpad in the early 1960s as his PhD thesis at MIT.

Sketchpad was a computer program on which the graphical user interface of today was modeled. That brief sentence does a poor job of expressing just how exciting and groundbreaking Sketchpad was at the time, but you can watch the linked video to see it for yourself.

Ivan Sutherland uses Sketchpad

“In addition to the risk to reputation and to pride, the very nature of research poses its own special risk. In research, we daily face the uncertainty of whether our chosen approach will succeed or fail.” --Ivan Sutherland, Technology and Courage

Emerging Input Technologies for Always-Available Mobile Interaction: This publication describes how we can use the technology currently available to form always-available interactions with computers. Think of how much time and attention a smartphone eats up every time you use it. It demands that you focus both your visual and haptic senses on the interaction. An always-available interaction would enable you to switch more efficiently between what you’re doing in the real world and the task you’re trying to accomplish via computing.

Alan Kay is an excellent resource because he is himself a researcher. He’s frequently voiced valuable critiques of where technology has ended up. It’s always worth spending time questioning industry decisions.

Alan Kay holds a Xerox keyboard as Cookie Monster looks on from the display. Xerox PARC, 1970s.

“When I look at the interfaces today, what I see...I see first a bicycle with training wheels on that people can’t see the training wheels because they don’t know it’s supposed to be a bike. And then the second thing I see, after 20 or 30 years of it is a bicycle with training wheels on it, completely encrusted with jewels and rhinestones...but it’s still got the fucking training wheels on it.” --from a 2013 interview

In a series of lectures at Stanford, you can listen to Kay’s full story and his take on how to think about building for the future.

Recap.

What kind of technology would have the principles described by these experts in HCI? Let’s recap what we picked up from the resources listed above.

  • Computers are an extension of us and should help to free us from mundane tasks without taking up our attention.
  • A product that people love to use must have a human-centered design, including both psychology and ergonomics.
  • The right product design must acknowledge the training wheels and then get rid of them.
  • Tech should disappear from our attention by blending into the environment.

So far we've seen my starting point and the resources I studied. But there was a lot more practical and hands-on research happening simultaneously to answer another question: Is there already a product that applies those HCI principles? To answer that as I studied, I also bought, tested, and took apart every relevant product I could get my hands on.

Interested in the hardware phase of research? Read Part II, where I detail the hardware products I tested, how I tested them, what I learned from them, and a whole lot more.


Follow us on Twitter to be the first to hear about new blog posts.

Get more done with a seamless audio interface. Pre-order Sentien Audio now.


This post was a collaboration between Imrich Valach and Liz Windsor.