Designing Human-Computer Interaction in Science Fiction

Lenny

Press "X" to admire hat
Joined
Jan 11, 2007
Messages
3,958
Location
Manchester
tl;dr - bad design is killing present and future technology, for the simple reason that what looks good in films and sounds exciting in literature rarely translates into something that works well in practice. As we have the potential to be the "next big thing!", whether in television, film, or literature, we ought to take care with our designs, because we may just influence computing for the next decade.

---

I have been having an argument with myself for about fifteen minutes on where this best fits - General Film Discussion, General Book Discussion, Technology, The Lounge, or General Writing Discussion. Eventually the GWD side of me barked the other sides into submission, with points about it being a useful topic in which everyone (not just the computer and design nuts) can discuss design in regards to their works, using a whole range of examples whether they be academic, filmy, booky, and so on.

The GWD side of me hopes that he was right, because if he isn't the rest of me is not going to let him live it down.

---

This thread has been inspired by this opinion piece: How 'Minority Report' Trapped Us in a World of Bad Interfaces, and I thoroughly recommend that it be read.

---

Human-Computer Interaction (HCI) is a field lying at the intersection of computer science, design, and several behavioural sciences. It looks at how users interact with computers, and aims to improve those interactions to make computers more usable and more receptive to a user's needs.

If it weren't for HCI research, then we wouldn't use a computer mouse, or windows-based user interfaces (not the Microsoft product, but what it's named for). Speaking of Microsoft, they realised, way back during the development of Windows 95, that users might not be able to work out how to access their programs. However, everyone knows the meaning of the word "start", so if you present users with a button captioned "Start", they'll click that first. That's HCI-driven design.

As an interesting aside, within HCI people argue about the use of a floppy disk as the "Save" icon. For those of us who used computers before the turn of the millennium, a floppy disk is an easily identifiable object, so the icon makes sense - click this picture of a floppy to put what you're looking at onto the floppy. But for people whose formative years in front of a computer involved newfangled storage methods such as the CD or USB pen (heck, anyone born after the tail end of the nineties), the icon probably makes no sense at all. So how can we update the icon without the change being confusing? Short answer: all of the software I have running at the moment just uses the word "SAVE" in a menu. Kind of boring, actually.

There is so much more to HCI, and it is a fantastically interesting subject when you really start thinking about it, but I don't want to bore people who don't really care.

As its name suggests, HCI also covers the way people physically interact with computers, and this is why I've started this thread.

If you read the article I linked to at the start, you should know where I'm going. If you haven't read it, I'll wait. ;)

Quick summary: Minority Report inspired a generation with thoughts of touchscreens and gesture-based interaction. Unfortunately, because the vast majority of people simply iterate on things they've seen that stand out, we're getting bogged down with terrible design that forces users into interaction that just doesn't work well!

Whilst Minority Report is the most iconic offender, it's not the only one. The recent series of Marvel films is just as bad - Tony Stark does laughable things with his hands to control all sorts of bizarre displays, and everyone at S.H.I.E.L.D. uses transparent monitors and personal devices! I, and many others, believe that gesture control and transparent displays have a role to play in our future, just not in the way they are depicted in fiction...

Gesture control is arguably one of the most exciting alternative interaction methods to the mouse and keyboard, and has been a hot research area for over thirty years. I took a crack at implementing it for my Master's dissertation - I bought a Microsoft Kinect and used it as an input device for gesture-based interaction on a Windows PC, and boy, was it fun! I felt like a vengeful GOD closing windows, running programs, and controlling my mouse cursor from the other side of the room with dramatic swipes of my arms. In my mind, Chrome tabs were the sinners I was smiting with powerful bolts of lightning. I could even feel the barrier in front of my body created by the software that when crossed gave me my power. I got weird looks from anyone who happened to glance through my window or walk past my door, but did I care? I was living the future! For about five minutes at a time. See, there's a well-documented problem with touchscreens called "gorilla arm" that also applies to hand-waving: turns out that the human body was not designed for your arms to be held out for long periods of time.
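
For anyone curious what that "barrier" actually looked like under the hood, here's a rough sketch of the idea in Python (not my actual dissertation code, which went through the official Kinect SDK - hand_positions() below is a made-up stand-in for whatever skeleton-tracking wrapper you happen to have):

```python
# Rough sketch: drive the mouse cursor with a tracked hand, but only while the
# hand is pushed through an imaginary "barrier" plane in front of the body.
# hand_positions() is a hypothetical generator yielding (x, y, z) hand
# coordinates in metres from your Kinect/skeleton-tracking library of choice,
# with z being the hand's distance from the sensor.

import pyautogui

BARRIER_Z = 1.2                        # hands nearer the sensor than this get the power
SCREEN_W, SCREEN_H = pyautogui.size()

def to_screen(x, y):
    """Map a roughly one-metre box in front of the sensor onto the screen."""
    sx = int((x + 0.5) * SCREEN_W)     # x in [-0.5, 0.5] m -> [0, SCREEN_W]
    sy = int((0.5 - y) * SCREEN_H)     # y in [-0.5, 0.5] m -> [0, SCREEN_H], flipped
    return (max(0, min(SCREEN_W - 1, sx)),
            max(0, min(SCREEN_H - 1, sy)))

for x, y, z in hand_positions():       # hypothetical tracking source
    if z < BARRIER_Z:                  # the hand has crossed the barrier
        pyautogui.moveTo(*to_screen(x, y))
```

Everything interesting - the swipes that closed windows, the lightning bolts of tab-smiting - was really just more conditions layered on top of that one check, and every single one of them needed my arm held out in front of me.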

Google (or "Bing", or "Duck". Use what you use. They just don't sound right as verbs) "gesture-based interface" and the whole history is there. Look through the Google Scholar results and you'll find plenty of papers and journal articles describing the implementation of gestures -- for example in systems to control robots -- and a good deal that look at designing the best gesture sets according to various metrics, including "intuitiveness".

Whilst I chose my dissertation research question solely for the fact that it sounded like a super cool project I was certain I'd have fun doing and would get a good mark in, I did actually learn things. I went in thinking that it would be awesome to be able to control my desktop computer with a wave of my hand from my bed, and came out having read the research and learnt from my experience testing the project that full control is not something you want to be doing through gesturing. There's a reason the mouse and keyboard still reign supreme (although touchscreens are giving them a run for their money).

It is my personal opinion that gestures should be used to add functionality to systems, rather than completely replace existing functionality. Although it occurs to me that Charlie Brooker may have written it with a smug voice in his head congratulating him on how brilliant his commentary on the prevalence of stupid interaction methods is, there was a scene in the first episode of the second series of Black Mirror that I thought showed gestures done well - the main character was sat with her laptop reading e-mails, and used simple swishes of her hand to move between messages and delete them. The rest was keyboard and touchscreen.

Despite it doing everything I wish it wouldn't, the recently released Leap Motion controller is an interesting bit of kit that would be useful in accomplishing such things.

Transparent displays as a thing aren't necessarily stupid. Whilst there is a totally different question about how they're lit, they can be quite useful - just look at Google Glass and other wearable computing devices, or the heads-up displays (HUDs) in aeroplanes and the way that images can be projected onto clear surfaces. There's an argument for using vivid colours, lest the images be lost against the background (so no, Mr. Stark, you don't want to use whites and light blues), and for tinting the glass when it's in use (tint the back layer of it black to increase the opacity, for example, and you'll have far less trouble seeing things on it), but that's plain sense. No, my problem here is one of visual design. Just take a look at the screens on the bridge of the Avengers' flying boat and tell me what they show at a glance. You can't do it.

Now, as a computer scientist, I can only admit to being a lowly programmer who finds joy in the theory of computation, analysing and writing algorithms, and building things that work but look monstrously ugly. I believe that the world would be a far easier place to create in if everyone could make do with command line interfaces, and although I may sometimes dabble in web design I usually end up creating things that look nice at a glance but soon become obvious as amateur attempts to mimic really beautiful design (my current favourite website in terms of design is Polygon, a gaming site from the company behind The Verge, my previous favourite). In short, I am not a designer. However, I can say with confidence that good design is not cramming as much information as possible into a space and highlighting it with bright colours, but rather visualising the data in a way that allows you to understand it at a glance. I'm pretty sure that the designers and other softwarey people on the boards will give similar opinions.

If it's something that interests you, look at the work of Edward Tufte, and the Data-Ink ratio.

However, design is obviously not just about visualising data. It goes into everything we use - our mobile phones (whether smartphones, dumbphones, or feature phones), the operating system on our computer, the TV guide on our set-top box. It's also present in the non-digital world - just look at how the central console in your car is laid out, think about what controls you use the most and see where they are in relation to the driver. If you don't drive, do the same with a remote control for a television/Hi-Fi/box.

---

As writers, what can we do? I guess the simplest answer is: stop and think about it. If you want to go further, then actually try it yourself - imagine you're controlling a device using gestures, and wave your arms around; get a piece of card, think of it as a touchscreen, and try out the interactions you've created; heck, get a piece of paper, draw out what you see in your mind (all it needs to be is simple shapes and lines) and try to use it. Explain the system to a friend, without telling them how it's used to complete a task (e.g. give them the set of gestures and what they do, then ask them to use that knowledge to act out a scenario. If it's zoom out, save, close the browser window, let them figure it out from the set of gestures rather than you telling them to wave one way, then the other) and get them to try it out.

If Chrons members can give critiques on the way things are written, and on what the content is like, then why not also on the interfaces and human-computer interaction that you describe? As with writing, there are enough technical members who know the theory, and a lot of members will likely be happy to try to imagine it and give thoughts. You need only ask.

---

Wall of text over. If you made it this far, have a digital gold star: *!

Hopefully this thread has got people thinking. I'm sure some of the designers are going to chip in, as will the engineers who will tell me that transparent displays aren't as great as I say they are, or work in different ways. :rolleyes:

Either way, it would be nice to get a discussion going about how fiction is getting it wrong. I can't think of any examples from books to complement the examples from film, but I'm sure someone can.

And even if you don't know the theory behind it all, I'm sure you've got opinions on what you think is done wrong. Post them! If it turns out that there are reasons behind things being the way they are, and someone can explain them, then we all gain that little bit of extra understanding.

Come on, prove the part of me that thinks this should be in GWD right! ;)

---

EDIT: This obviously also applies to stories set in modernish times, but I guess it also applies to fantasy, it's just that I should have been in bed before I started posting and in this state I can't think how. I don't know... all trebuchets are built with the control mechanisms looking the same way, and being put in the same place, to aid familiarity between different models and designs?
 
I like that you clearly put a lot of thought and effort into the topic, but unless the particulars of a user-interface are actually important to the story, I don't think there's any reason to worry about them. And to say we should because we might influence computing for the next 20 years is more than a bit egotistical. The vast majority of sf has no real predictive element that isn't almost immediately seen as laughable.

As a librarian with a love of the future, the particulars of user-interface and where it could be going are really important to me. But as a writer, it's basically a non-issue in the sense that it's literally irrelevant to telling a good story. I'm sure once I've said that someone will find a story or come up with a premise that is determined by the particulars of UI.
 
Although I agree that a UI is unlikely to influence the overall story, I don't see why it should be used as an excuse to not do it properly. If we're wishy-washy in our explanations, however brief, then surely we leave ourselves open to accusations of laziness? The simple act of designing something feasible that also works well will add a feeling of confidence to our writing that is hard to fake. Even if all of our design doesn't make it into the final draft, it should still show through. At the very least, any interaction described in the story will be consistent. When elements of a story break down under scrutiny, then you have to wonder if the writer was actually trying, or if those elements were just thrown in to fill a gap.

As for the possible egotism, yes, it is egotistical to think that I may influence design, or plant the spark that kicks off the next revolution in interaction with machines, but being realistic takes the fun out of everything. If I think I have the chance of inspiring even just one person, if I can leave my mark on just one imagination, then I will consider myself successful. I'd rather work towards the possibility of great things than towards the likelihood of little success, because I believe my work will be better for it.

The reason I got into computer science is not because I liked sitting at a machine doing my homework. It's not because computers were seen as luxury devices as I grew up (I'm a nineties kid), so using one was an exciting treat, either. I am a computer scientist because fiction let my imagination run wild with all the possibilities these incredible little machines can achieve. I wanted to be part of that. If the fiction I read or saw on television/in films was half-baked and badly realised, then I would have become a historian as history is the greatest story of all. Predictive elements can boil their heads (they're only seen as predictive when they've come to pass anyway) - it's the fantastical that inspires.

I just think the fantastical should be done right. I'd like to think that if Minority Report had shown realistic gesture-based systems then we'd be using them by now.

I've said it in other threads, and I'm sure I'll say it again - we already have all of the components for a future that seems a decade away, we're just not using them in the right ways. If we weren't bogged down by trying to mimic flashy interfaces in fiction, we could be ahead of where we are as a race technologically.

Of course, opinion is opinion. It's just that mine will inevitably delay anything I work on so that I can squeeze that extra bit of authenticity out of it! :p
 
C'mon, context!

I have realistic expectations for where I will end up with my writing (a couple of books in a bargain bin, and no recognition as I go through my days as a code monkey), and the ones that keep me writing (film franchises, a knighthood, being bigger than the SF greats themselves! Most importantly, inspiring generations to come to stretch the boundaries of possibility - after all, great SF beautifully captures the wonder of discovery). Without almost unattainable goals, what is there to aim for? There's no fun in planning for a dead-end career entering meaningless data into a spreadsheet.

In terms of the writing, whilst it may be beyond anything we can accomplish in this day and age, it should at least be thought through thoroughly. If I'm inspiring a generation, I don't want them to work towards ideas with flaws that are obvious when you stop and think. If it's hard for people to suspend disbelief, then you're doing it wrong.

If I must, then here's a compromise, should you be so minded - give the reader/viewer enough of something realistic for them to think that what you present is possible (I'm sure there's a whole field devoted to researching how audiences perceive the realism of fantastical ideas). Without a tether to the real world, ideas can easily float away into the absurd and distract from the story (I guess you can also say that the less an idea stands out, despite being completely futuristic, the more effective it is. Look at how everyone now views spaceships - obvious grounding in reality, but we're unlikely to question the feasibility of things like the Serenity because they've been thought through and engineered).
 
That was cheap, I admit it.

When dealing with other "fields", say engineering, astrophysics, etc, I can easily see a story actually resting on the specifics of how the math and physics work. Like Robert J. Sawyer's Shoulders of Giants. A great quick read. Go ahead. I'll wait.

But (and maybe this is a lack of imagination on my part), I don't see the particulars of how a user interface works as being that crucial to a story. At least not crucial enough to warrant the kind of research your average SF writer would put into getting the physics right for a piece.

I'm just talking about prose here though. In a more explicitly visual medium, I'm sure a bit more thought should be given to this, but again, I don't see it as vital to the story in any real way.

But to play ball, there's not really going to be any visible UI that we'd need to worry about after the next, say, 10-15 years. With Google Glass, electronic contact lenses, and implanted chips that let you link directly to systems with your mind, I really don't think this will be a field to dig deep into generally, much less with an eye toward major influence.

I don't agree that a lack of questioning a particular SF thing (a la the Serenity) is an adequate measure of verisimilitude. The audience suspends its disbelief for a variety of reasons, one is that they accept the object as presented, another is they simply don't care and just want a fun ride.

(My dreamy-time goals are award ceremonies. Mostly of the time travel variety, wherein I get to attend past awards shows and meet the giants whose shoulders we stand on. But that's me. If you're going to dream, might as well really go for it, no?)
 
Woo! I got a gold star! :)
See the sort of thing that registers most clearly with me?

Right, a little more serious. I think you've got a point, Lenny. There's research being done into a device that beams info directly on to your retina. That kind of annoyed me, because I wrote a scene with just such a device, and then found out it was actually being worked on. Grr! That's leaving aside the direct input chips mentioned by FH.

The reason you've got a point is that, to my mind, it's part of the broad scheme of the universe you're painting in a story. It might not be the be all and end all of the work, but anachronisms stick out. Even if I only write for myself, I want it to be as good as I can get it. If I'm reading, or watching, there's only so much I can take in terms of suspending disbelief - stupid and illogical makes me reach for the off button/throw the book down.

As I'm on the topic, how many books are, in many ways, thought experiments? Isn't that too a valid reason for fiction? Not just older works by Clarke, Asimov and Niven (or Kafka), but Banks' Culture has aspects of it, off the top of my head - I'm sure there are plenty of others. The technology is often part and parcel of that experiment.

I've been thinking about how to portray the tech in my work, as a decidedly non-techy person. You've given me a hand in focussing those thoughts, so thanks.
 
As a person who tries to get into comic writing and drawing more than prose-writing, I'm pretty interested in designs and interfaces. I've read both the link you've given and your article. There are some ideas that I agree with and some that I don't, but I think most of what is being said is a bit of an exaggeration. I'm not trying to say that designs in SF are pretty and useful, but they aren't as bad as you make them out to be.

First of all, the Avengers thing. It's not that hard to understand what is being displayed on those monitors. I agree that there is too much data packed into a single screen, but it's not hard to understand. And the article in the link almost makes hand gestures into villains. I think most of the gestures work well and are reasonable enough to be used. Smartphones are easy to use, and they employ these hand gestures that are accused of being unnatural.

But, I wholeheartedly agree that most of the designs in SF are generalised and unoriginal. They need to be more original; future technology shouldn't be all about hover-bikes and hover screens. What I want to see more of is things like Ghost in the Shell. The designs in that series are simply beautiful. They don't scream "Hey, hey, I'm technological, better be aware!" but they are distinguishable enough to make out that they are products of a superior understanding of technology.

For example, they use cables to connect and synchronize with other people and/or computers. This is the original idea that also inspired The Matrix. I believe this is a perfect example of how designs should be: original and reasonable. http://www.mkygod.com/matrixgits/neck2.jpg

Ghost in the Shell also uses the hover screen in a reasonable and original way. This is the interface of the cyberbrain: they are clearly hover screens, but they are only visuals created by the brain, and they can be seen by that person only. They also don't require gestures to control; because they are created by the brain as virtual data, they are operated by thought. http://digital.leadnet.org/images/2008/08/08/gits_1.jpg

Another great example is the device Ishikawa uses to dive into the net. http://www.anime-gift.com/gallery/media/ghost-in-the-shell-standalone-complex/ishikawa/48309deb-d81e-11df-8228-a8bfc396a36f.jpg

This next example is pretty unrelated, but I'm still going to write about it. Recently I've stumbled across a hidden gem called Heat Guy J. It's an unpopular anime and, even though it's made by some pretty popular designers who also designed Escaflowne, it is hard to find. The title sounds pretty cheesy and like what you'd expect from a B-class movie or something, but it's unexpectedly good. What blew me away, though, were the designs. They're not too far-fetched or extremely new, but they are something we don't see much.

The android design, for example. In most sci-fi works, androids are works of incredible technology and they operate at pretty high performance. But the android in this series has an overheating problem after prolonged stretches of action and has to let out high-pressure steam. I found this idea refreshing and brilliant. I couldn't find a good image on Google and I'm too lazy to screenshot one and upload it, so I'm going to post a video. If you look around 0:22 you'll see what I mean:

http://www.youtube.com/watch?v=6Hh78m0GC8o

Anyways, this has been a long wall of text and I'm not sure if anyone's going to read it but this is my opinion on the subject. Thanks for sharing the article, even though I don't agree with most of what is said I still learned a lot from it.
 
Some of us can recall the days when voice was going to be a major part of interacting with a computer (at least according to SF on the TV); presumably one driver was that it was cheaper to employ a voice actor speaking in a "computer accent" than building a plausible interface with a more visual appeal (in the same way as the Star Trek transporter saved a lot of money on props and time on screen).

Has this possible future disappeared, or is it simply on hold (in the real world) because other technologies (touchscreens and gesture recognition) are, at the moment, easier to implement reliably?




** - By the way, the spacefaring aliens in my WiPs avoid using voice-based HCIs, for an as yet unspoken (;)) reason, so I'm asking this simply to widen the debate.
 
In the middle of the article you quote it says:
It’s important, of course, to put this in context. Minority Report came out in 2002, and we had touchscreens for a long time before then.
I was about to say something similar myself. The first time I noticed touch-screens on TV/cinema was the LCARS system in Star Trek: The Next Generation. But that wasn't a new idea to the designers of that show; they just picked it up from current research. In fact, I'd say that with most scientific ideas TV and film are always slightly behind the curve. The reason is the time it takes for a film/TV series idea to get pitched, picked up, written, financed, piloted, filmed and eventually shown or syndicated. These projects take years. Books, and more especially magazine short stories, can be published far quicker, and so are always more on the ball.

What film/TV does do is help to show those ideas to a greater audience than the small group of people who read scientific papers on obscure subjects such as Human-Computer Interfaces. I don't see how that can be a bad thing. That designer in the article would have a bigger problem if all his clients came to see him and had no idea what they wanted; at least there was a basis to begin with.

Personally, I see the ideal Human-Computer Interface as something that is totally inconspicuous. We will need to use it while we are doing something else at the same time - driving a vehicle, operating machinery, carrying out a medical procedure, playing a sport - so the voice activated or hard-wired, hands free, retina projected ideas make much more sense to me than something that needs you to move your arms to work.
 
When dealing with other "fields", say engineering, astrophysics, etc, I can easily see a story actually resting on the specifics of how the math and physics work. Like Robert J. Sawyer's Shoulders of Giants. A great quick read. Go ahead. I'll wait.

A nice story. I wasn't too keen on the beginning, but it did pick up. I can't help but feel some sadness, though - it strikes me that the colonists will probably end up repeating this song and dance every time they reach their new destination, and never actually settle.

But (and maybe this is a lack of imagination on my part), I don't see the particulars of how a user interface works as being that crucial to a story. At least not crucial enough to warrant the kind of research your average SF writer would put into getting the physics right for a piece.

Whilst it's very likely that I have a strong bias stemming from my academic interests, I am convinced that we're going to start seeing increased popularity in emerging genres within science fiction that focus heavily on human interaction with computer systems -- a resurgence in cyberpunk, maybe, if not something new (post-cyberpunk? New wave cyberpunk?) -- because as a race we are living with such high levels of hyperconnectivity. Much like the Cold War and the Space Race kicked off a classic age of science fiction that dealt with the wonders of discovery out in space (amongst other things, of course), the advent of the smartphone and personal computing on a tiny physical and massive digital scale could kick off another classic age of science fiction. If the market grows, then why should we not see writers researching design and HCI in a way that they currently research maths, physics, and engineering?

Again, because of my bias, I find that my enjoyment of fiction can be ruined by bad computer science, much like the way Afghanistan and Iraq veterans reacted to how the military characters behaved in The Hurt Locker.

---

I've been thinking about how to portray the tech in my work, as a decidedly non-techy person. You've given me a hand in focussing those thoughts, so thanks.

A pleasure. :) I'm just wondering where everyone else who might be able to help has got to. I hope they're not hiding from the possibility of being quizzed, in the same way that people ask questions of the biologists, or geologists, or physicists, or the lawyers on the boards.

---

But, I wholeheartedly agree that most of the designs in SF are generalised and unoriginal. They need to be more original; future technology shouldn't be all about hover-bikes and hover screens. What I want to see more of is things like Ghost in the Shell. The designs in that series are simply beautiful. They don't scream "Hey, hey, I'm technological, better be aware!" but they are distinguishable enough to make out that they are products of a superior understanding of technology.

I love Ghost in the Shell (the anime series at least - I've not read the manga)! I agree that a lot of the technology is well-designed, and I always keep some of the examples in the back of my mind as I create my own, not only because they are good to learn from, but also because they parallel reality nicely, in some cases, and can be built upon beautifully (particularly things like the interfaces with the 'net).

---

Has this possible future disappeared, or is it simply on hold (in the real world) because other technologies (touchscreens and gesture recognition) are, at the moment, easier to implement reliably?

Could it not be argued that the technology and the understanding just wasn't good enough when voice was last considered as "the future"?

For you, Ursa, I have found a couple of papers on using Hidden Markov Models to recognise speech (although I can't comment on their reputability - arXiv and CiteSeerX might have good collections of articles, but I don't think they're subject to the same level of peer review as traditional journals), a Wired article that touches upon Google's efforts with neural networks (out of all of the speech recognition software I've tried, Google's implementation in the Jelly Bean release of Android is by far the most accurate), and, of course, Wikipedia:

http://arxiv.org/abs/1003.0206
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.61.3128
http://www.wired.com/wiredenterprise/2013/02/android-neural-network/
http://en.wikipedia.org/wiki/Speech_recognition

---

If anyone else is interested, Hidden Markov Models (HMMs) are powerful statistical models that use the probabilities of hidden states, and of the outputs those states produce, to work out the most likely explanation for a given sequence of observations. If you want to really get into it, an HMM is a simple example of a dynamic Bayesian network.

Artificial Neural Networks (ANN) are mathematical models inspired by biological brains (in that they consist of interconnected "neurons" that process information).

With enough training data, both HMMs (and Bayesian networks) and ANNs can be trained to recognise inputs that are similar to those they have learnt, and to give a predicted output that is likely to be right. The more training data, the cleverer they become.

So any system that relies on an input that can be broken down into specific patterns (such as speech, or gestures -- hand waving direction, hand poses, etc) can utilise HMMs or ANNs to provide accurate responses.

Of course, it's worth noting that whilst these can be termed "artificial intelligence", the understanding shown by both models does not yet translate to independent thought (though given time... after all, what is thought but a highly complex pattern based on a huge set of variables?).
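
If anyone fancies seeing what the classic recipe looks like in practice, here's a toy sketch in Python using the hmmlearn library (not code from my dissertation - the gesture recordings are assumed to be arrays of (x, y, z) hand positions from something like a Kinect). You train one HMM per gesture, then classify a new recording by asking which model finds it most likely:

```python
# Toy sketch of HMM-based gesture recognition: train one model per gesture
# class, then classify a new recording by whichever model gives it the
# highest log-likelihood. Each recording is assumed to be a NumPy array of
# shape (n_frames, 3) holding x, y, z hand positions.

import numpy as np
from hmmlearn import hmm

def train_gesture_models(recordings_by_gesture, n_states=5):
    """recordings_by_gesture: dict mapping gesture name -> list of recordings."""
    models = {}
    for name, recordings in recordings_by_gesture.items():
        X = np.concatenate(recordings)          # stack every frame of every recording
        lengths = [len(r) for r in recordings]  # so the model knows where each one ends
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)                   # Baum-Welch training
        models[name] = model
    return models

def classify(models, recording):
    """Return the gesture whose model assigns the recording the highest score."""
    return max(models, key=lambda name: models[name].score(recording))
```

Swap the HMMs for a small neural network and the shape of the thing barely changes: features in, probabilities out, and the quality (and quantity) of the training data decides almost everything.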

---

Personally, I see the ideal Human-Computer Interface as something that is totally inconspicuous. We will need to use it while we are doing something else at the same time - driving a vehicle, operating machinery, carrying out a medical procedure, playing a sport - so the voice activated or hard-wired, hands free, retina projected ideas make much more sense to me than something that needs you to move your arms to work.

Surely it depends on the application? Hand-waving obviously has no place in keyhole surgery (the horror of a surgeon sat behind a desk waving his hands to control the robotic arm wielding a scalpel!), but in something like 3D design it could prove useful - although Tony Stark looks like a madman with some of his hand-waving, being able to visualise a three-dimensional model and rotate it using natural movements might have its uses when engineers are showing designs, or architects mock-ups, to clients. On the other side of the coin, research is being done into gesture-based interfaces to help the disabled, with small hand movements to control systems, for example, or systems that understand and teach sign language (just a couple of examples of things I came across when doing literature research for my dissertation).

The future of interaction is in touch, voice, and movement - in instances when one method doesn't work, another is likely to.
 
Woo! I got a gold star! :)
See the sort of thing that registers most clearly with me?

WooHoo - ditto.

I have a WIP that uses the whole connectivity thing as a turning point in the plot, and it's key because the humans just use the tech almost without thinking. I think the use of computers will be virtual for us, the users, and it will overlay the world around us and be super easy to use. The really advanced stuff won't employ hand movements (save that for the movies); it will be thought driven. Great for a storyline, because the reader is in on the thoughts of the characters and the tech interaction; crap for a movie, because they might just be walking along window shopping! I'm using the term "linking" for computer interaction - a character is saying one thing and linking another, etc. It adds loads of layers into a plot real quick and can be fun to write.
:rolleyes:
I have to be honest, hand movements etc never even dawned on me, my SciFi world has moved on. :cool:

Good post, Lenny.
 
I have to admit that my (non-voice-based) HCI does play its part in the plot of my WiPs, Fishbowl Helmet**, though the precise details of the interface are not (necessarily) key to this.


And thanks for those links, Lenny. :)





** - Presumably (parts of) a fishbowl helmet can form one part of an HCI. ;):)
 
I have to admit that my (non-voice-based) HCI does play its part in the plot of my WiPs, Fishbowl Helmet**, though the precise details of the interface are not (necessarily) key to this.

And thanks for those links, Lenny. :)

** - Presumably (parts of) a fishbowl helmet can form one part of an HCI. ;):)

I have a wip about a kid getting his first set of electronic contact lenses. So that he's interfacing with a computer as he's walking around is a big part of the story, the spine really, but the particular interface, where and how it's displayed in his vision is basically irrelevant, except when he's spammed with ads that block his view. But that's not quite the same as what's being talked about up thread or in the linked articles.
 
I have a wip about a kid getting his first set of electronic contact lenses. So that he's interfacing with a computer as he's walking around is a big part of the story, the spine really, but the particular interface, where and how it's displayed in his vision is basically irrelevant, except when he's spammed with ads that block his view. But that's not quite the same as what's being talked about up thread or in the linked articles.

I think it is related to the article. Lenny is asking why Hollywood gets it so wrong and what (future, fingers crossed) SciFi writers are thinking today. Three of us have direct interfaces with computers in our WIPs and are writing about how this affects daily human life. So I'm glad to say, we seem to be ahead of Hollywood for now! ;)
 
I agree, Bowler. Even small, 'inconsequential' elements of the story can cause the whole thing to fail, if there isn't a consistency. Big high-tech world, and then some silly error. Too many, and I'm likely to put the book down, or stop watching.
 
I started watching Star Trek: Deep Space Nine recently (my first venture into Star Trek, believe it or not!) and, as I tend to do when watching SF with interesting tech, I've been trying to work out how the systems work -- the transporters, for example. Not how they energise people, but the contact between two transporters, the protocols, handshakes, acknowledgements, and how they must have been standardised way back when to allow transporters made by different races to communicate with each other.

The one that I think is relevant to this thread, however, is the communication with ship computers -- "Computer... open a subspace link to Bajor", "Computer... engage engines", "Computer... tell me I'm pretty". It's an elegantly simple and effective way of ensuring that the computer only responds when it's asked - the crew have to say "computer" for their commands to be interpreted.

Most of the time...

Every fifteen or twenty commands, or so, the computer responds to something that isn't preceded by "computer", which annoys me a little, but I'm willing to let it slide for now.

The most interesting thing about this is that modern products (a full twenty years since DS9 aired, though I doubt that it's the first time this interaction has been shown) have adopted similar interaction methods. Google's Glass, for example, requires the user to say "OK, Glass" before every command.
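
The logic behind that sort of wake word is almost embarrassingly simple. A toy sketch, assuming some speech-to-text layer has already handed you the recognised words (transcripts() below is hypothetical):

```python
# Toy wake-word gate: ignore any utterance that doesn't start with the wake
# word, and hand the rest of it to a command dispatcher.
# transcripts() is a hypothetical generator of recognised utterances from
# whatever speech-to-text layer you're using.

WAKE_WORD = "computer"

def handle(command):
    print(f"Executing: {command}")      # stand-in for the real command dispatcher

for utterance in transcripts():         # hypothetical speech-to-text feed
    words = utterance.lower().split()
    if words and words[0].strip(",.") == WAKE_WORD:
        handle(" ".join(words[1:]))     # "Computer, engage engines" -> "engage engines"
    # anything not addressed to the computer is ignored... in theory
```

The DS9 computer responding to commands that weren't addressed to it is, in essence, a bug in that one if statement.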

In terms of the interface being key to the plot, there have been two episodes (in the first series) where this interaction has been brought to the fore: in one episode the crew of DS9 are hit by a neurological virus that affects the way they speak (and thus they can't communicate with the computer), and in another the way the computer responds indicates that it has been infected with a virus.

---

Just a nice example I wanted to share.
 
Interesting reading. I'm thinking I'm going to have to do some rewriting of my computer-ish descriptions now. In my world everything seems to be displayed via orange holograms, for some reason. I've probably been playing too much Mass Effect.

And I agree, little details are very important.
 
Around the turn of the century I was involved with a real time animation system – Doctoon, a cartoon character who interacted with kids in a hospital, and frequently elicited more honest responses from them than traditional medical personnel.

The interface was a graphics pad, which gave two dimensions of control like a mouse. When you lifted the pen off to change what characteristic one axis was controlling, the character went dead. So your other hand was on a conventional ASCII keyboard, with shortcut caps. One eye was on the video camera showing the kid(s) in the ward, another was watching the screen to see what your character was doing, a third was checking keyboard reflexes (so as not to generate a sudden somersault during a sympathetic bit), and your toes needed eyes of their own to stop the two-dimensional (swivel and tip) expression pedal (developed from a guitar effect pedal) from getting confused with the very similar one for background pan and zoom. Mouth movements, obviously, followed a head-worn microphone, the harmonised signal from which would be fed through the room's speakers.

Previous to that, in the very late sixties, I had the privilege of working on a totally remote-able TV camera, where pan, tilt, zoom, focus, height of pedestal, and, to a certain extent, the position of the camera in the room (it was on railway tracks) could all be controlled by one hand, with a special rotating, swivelling, screwing, sliding controller hand-built for the purpose (the camera was in Heathrow airport, the controller in Television Centre, so they could do interviews of arriving passengers without having to be on the spot, and have one hand free to fire off telecine machines, operate a microphone mixer, or scratch with, I suppose).

And, of course, I've worked with many "alternative" controllers for musical instruments. Bob Moog never wanted the standard equitempered black and white keyboard for synthesizers; too rigid, not expressive enough (but it was what people were used to, so it stuck). Ribbons, twisty things, theremin-like space detectors; but they're not intuitive, and show no symptoms of universality.

Touch screens are great, but… Very much but, in my line of work. I really do like to have a physical knob selected to turn a control; doing it with the screen itself is not natural yet.

And it's all maximum two dimensions, when for complex control we need lots more; probably six or seven, involving different parts of the body for which we have natural independence. Dancing control, natural feeling so we don't have to be watching everything at the same time. Not just computers, musical instruments and animation programs, but cranes, airliners, railway stations, even airports.
 
This is probably old news to everyone else in this room, but I just heard that eye tracking is the latest thing in HCI. It was described to me as "you're reading down the page and the tracker sees your eye get close to the bottom and scrolls up for you". This makes sense to me, since I just watched a really old documentary about how advertisers have been using blink-monitors to gauge ad effectiveness for decades, and I remember some film makers talking on the special features of some movie about the old days, when they would sit in the theater with their test audience to get a feel for what their reactions were, "because it takes people a while to realize they are bored, and will tell you something is boring when it was the bit just before that that lost their interest."
Sorry if those seem random reasons to believe something; strange bits of information lodge themselves in the cracks of my brain till something builds paths between them.
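
As I understand it, the trick itself is dead simple - something like this sketch, assuming the eye tracker can report where on the screen you're looking (gaze_points() below is made up):

```python
# Toy auto-scroll: when the reader's gaze reaches the bottom part of the
# screen, scroll the page to bring up the next chunk of text.
# gaze_points() is a hypothetical source of (x, y) screen coordinates
# coming from an eye tracker.

import pyautogui

SCREEN_H = pyautogui.size().height
BOTTOM_ZONE = int(0.85 * SCREEN_H)   # the bottom 15% of the screen

for x, y in gaze_points():           # hypothetical eye-tracker feed
    if y > BOTTOM_ZONE:              # the reader has nearly reached the bottom
        pyautogui.scroll(-300)       # negative = scroll down towards new text
```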

I almost love world building more than I love writing. (Really I love story telling, but that's leading me off point.) I completely agree that what gets written and put out there shapes the future. I mean, some guy bought up all the air space for signals to travel through after watching Star Trek, and when cell phone companies got invented they had to buy/rent space from him. (If I have been misled and was being totally gullible to believe that story, someone correct me now, please.) UI on blood glucose monitors is getting very close to the "let me just touch you with this device and know what's wrong with you" imaginings of early SF.

It's out there. And I think it's awesome.
 
