tl;dr - bad design is killing present and future technology, for the simple reason that what looks good on film and sounds exciting on the page rarely translates into something that works well in practice. Since any of us could write the "next big thing!", whether in television, film, or literature, we ought to take care with our designs, because we may just influence computing for the next decade.
---
I have been having an argument with myself for about fifteen minutes over where this best fits - General Film Discussion, General Book Discussion, Technology, The Lounge, or General Writing Discussion. Eventually the GWD side of me barked the other sides into submission, with points about it being a useful topic in which everyone (not just the computer and design nuts) can discuss design with regard to their works, drawing on a whole range of examples, whether academic, filmy, booky, and so on.
The GWD side of me hopes he was right, because if he wasn't, the rest of me is not going to let him live it down.
---
This thread has been inspired by this opinion piece: How 'Minority Report' Trapped Us in a World of Bad Interfaces, and I thoroughly recommend that it be read.
---
Human-Computer Interaction (HCI) is a field lying at the intersection of computer science, design, and the behavioural sciences. It looks at how users interact with computers, and aims to improve those interactions to make computers more usable and more responsive to a user's needs.
If it weren't for HCI research, we wouldn't use a computer mouse, or windows-based user interfaces (not the Microsoft product, but what it's named for). Speaking of Microsoft, they realised, way back during the development of Windows 95, that users might not be able to work out how to access their programs. However, everyone knows the meaning of the word "start", so if you present users with a button captioned "Start", they'll click that first. That's HCI-driven design.
As an interesting aside, within HCI people argue about the use of a floppy disk as the "Save" icon. For those of us who used computers before the turn of the millennium, a floppy disk is an easily identifiable object, so the icon makes sense - click this picture of a floppy to put what you're looking at onto the floppy. But for people whose formative years in front of a computer involved newer storage methods, such as the CD or USB pen (heck, anyone born after the tail end of the nineties), the icon probably makes no sense at all. So how can we update the icon without the change being confusing? Short answer: all of the software I have running at the moment just uses the word "SAVE" in a menu. Kind of boring, actually.
There is so much more to HCI, and it is a fantastically interesting subject when you really start thinking about it, but I don't want to bore people who don't really care.
As its name suggests, HCI also covers the way people physically interact with computers, and this is why I've started this thread.
If you read the article I linked to at the start, you should know where I'm going. If you haven't read it, I'll wait.
Quick summary: Minority Report inspired a generation with thoughts of touchscreens and gesture-based interaction. Unfortunately, because the vast majority of people simply iterate on things they've seen that stand out, we're getting bogged down with terrible design that forces users into interaction that just doesn't work well!
Whilst Minority Report is the most iconic offender, it's not the only one. The recent run of Marvel films is just as bad - Tony Stark does laughable things with his hands to control all sorts of bizarre displays, and everyone at S.H.I.E.L.D. uses transparent monitors and personal devices! I, and many others, believe that gesture control and transparent displays have a role to play in our future, just not in the way they are depicted in fiction...
Gesture control is arguably one of the most exciting alternatives to the mouse and keyboard, and has been a hot research area for over thirty years. I took a crack at implementing it for my Master's dissertation - I bought a Microsoft Kinect and used it as an input device for gesture-based interaction on a Windows PC, and boy, was it fun! I felt like a vengeful GOD closing windows, running programs, and controlling my mouse cursor from the other side of the room with dramatic swipes of my arms. In my mind, Chrome tabs were the sinners I was smiting with powerful bolts of lightning. I could even feel the barrier the software created in front of my body that, when crossed, gave me my power. I got weird looks from anyone who happened to glance through my window or walk past my door, but did I care? I was living the future! For about five minutes at a time. See, there's a well-documented problem with touchscreens called "gorilla arm" that also applies to hand-waving: it turns out the human body was not designed for your arms to be held out for long periods of time.
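To give a flavour of what "recognising a gesture" actually involves, here's a toy version of the sort of logic I mean - a swipe classifier run over a stream of hand x-positions. Everything here (the thresholds, the assumption that a Kinect-style tracker hands you joint positions in metres) is illustrative, not lifted from my actual project:

```python
# Toy swipe detector: classify a horizontal swipe from a stream of
# hand x-positions (metres, as a Kinect-style tracker might report).
# All thresholds are made up for illustration, not tuned values.

def detect_swipe(xs, min_distance=0.4, max_backtrack=0.05):
    """Return 'left', 'right', or None for a sequence of x samples."""
    if len(xs) < 2:
        return None
    travel = xs[-1] - xs[0]
    direction = 1 if travel > 0 else -1
    # Reject wobbly motions: the hand shouldn't reverse direction much.
    for a, b in zip(xs, xs[1:]):
        if (b - a) * direction < -max_backtrack:
            return None
    if abs(travel) >= min_distance:
        return "right" if direction > 0 else "left"
    return None

print(detect_swipe([0.0, 0.1, 0.25, 0.5]))  # "right"
print(detect_swipe([0.5, 0.3, 0.1, 0.0]))   # "left"
print(detect_swipe([0.0, 0.2, 0.0, 0.2]))   # None - wobble, not a swipe
```

Even this toy shows the design problem: the thresholds decide what counts as "intentional", and every user waves differently. That tuning is where most of the research effort goes.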
Google (or "Bing", or "Duck". Use what you use. They just don't sound right as verbs) "gesture-based interface" and the whole history is there. Look through the Google Scholar results and you'll find plenty of papers and journal articles describing the implementation of gestures -- for example in systems to control robots -- and a good deal that look at designing the best gesture sets according to various metrics, including "intuitiveness".
Whilst I chose my dissertation research question solely for the fact that it sounded like a super cool project I was certain I'd have fun doing and would get a good mark in, I did actually learn things. I went in thinking that it would be awesome to be able to control my desktop computer with a wave of my hand from my bed, and came out having read the research and learnt from my experience testing the project that full control is not something you want to be doing through gesturing. There's a reason the mouse and keyboard still reign supreme (although touchscreens are giving them a run for their money).
It is my personal opinion that gestures should be used to add functionality to systems, rather than to completely replace existing functionality. There was a scene in the first episode of the second series of Black Mirror that I thought showed gestures done well (although it occurs to me that Charlie Brooker may have written it with a smug voice in his head, congratulating him on how brilliant his commentary on the prevalence of stupid interaction methods is) - the main character was sat with her laptop reading e-mails, and used simple swishes of her hand to move between messages and delete them. The rest was keyboard and touchscreen.
Despite it doing everything I wish it wouldn't, the recently released Leap Motion controller is an interesting bit of kit that would be useful in accomplishing such things.
Transparent displays as a thing aren't necessarily stupid. Whilst there is a totally different question about how they're lit, they can be quite useful - just look at Google Glass and other wearable computing devices, or the heads-up displays (HUDs) in aeroplanes and the way that images can be projected onto clear surfaces. There's an argument about using vivid colours, lest the images be lost against the background (so no, Mr. Stark, you don't want to use whites and light blues), and tinting the glass when it's in use (tint the back layer of it black to increase the opacity, for example, and you'll have far less trouble seeing things on it), but that's plain sense. No, my problem here is one of visual design. Just take a look at the screens on the bridge of the Avengers' flying boat and tell me what they show at a glance. You can't do it.
Now, as a computer scientist, I can only admit to being a lowly programmer who finds joy in the theory of computation, analysing and writing algorithms, and building things that work but look monstrously ugly. I believe that the world would be a far easier place to create for if everyone could make do with command line interfaces, and although I may sometimes dabble in web design, I usually end up creating things that look nice at a glance but soon become obvious as amateur attempts to mimic really beautiful design (my current favourite website in terms of design is Polygon, a gaming site from the company behind The Verge, my previous favourite). In short, I am not a designer. However, I can say with confidence that good design is not cramming as much information as possible into a space and highlighting it with bright colours; it is visualising the data in a way that allows you to understand it at a glance. I'm pretty sure that the designers and other softwarey people on the boards will give similar opinions.
If it's something that interests you, look at the work of Edward Tufte, and the Data-Ink ratio.
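For the curious, Tufte's data-ink ratio is simply the fraction of a chart's ink that actually encodes data, as opposed to decoration. A back-of-envelope sketch, with completely made-up "ink" weights:

```python
# Back-of-envelope illustration of Tufte's data-ink ratio:
#   ratio = ink that encodes data / total ink on the chart.
# The ink weights below are invented purely for illustration.

def data_ink_ratio(elements):
    """elements: list of (ink_spent, encodes_data) pairs."""
    total = sum(ink for ink, _ in elements)
    data = sum(ink for ink, is_data in elements if is_data)
    return data / total

cluttered = [(10, True),   # the data line itself
             (8, False),   # heavy gridlines
             (6, False),   # 3D bevels and drop shadows
             (4, False)]   # decorative background fill

stripped = [(10, True),    # same data line
            (2, False)]    # light axis ticks

print(round(data_ink_ratio(cluttered), 2))  # 0.36 - most ink is decoration
print(round(data_ink_ratio(stripped), 2))   # 0.83 - most ink is data
```

Tufte's advice, crudely compressed: erase non-data ink until the ratio stops improving without losing meaning. That's exactly what the S.H.I.E.L.D. screens fail to do.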
However, design is obviously not just about visualising data. It goes into everything we use - our mobile phones (whether smartphones, dumbphones, or feature phones), the operating system on our computer, the TV guide on our set-top box. It's also present in the non-digital world - just look at how the central console in your car is laid out, think about what controls you use the most and see where they are in relation to the driver. If you don't drive, do the same with a remote control for a television/Hi-Fi/box.
---
As writers, what can we do? I guess the simplest answer is: stop and think about it. If you want to go further, then actually try it yourself - imagine you're controlling a device using gestures, and wave your arms around; get a piece of card, think of it as a touchscreen, and try out the interactions you've created; heck, get a piece of paper, draw out what you see in your mind (all it needs to be is simple shapes and lines) and try to use it. Explain the system to a friend without telling them how it's used to complete a task, and get them to try it out (e.g. give them the set of gestures and what they do, then ask them to use that knowledge to act out a scenario. If it's "zoom out, save, close the browser window", let them figure it out from the set of gestures rather than you telling them to wave one way, then the other).
If Chrons members can give critiques on the way things are written, and what the content is like, then why not also on the interfaces and human-computer interaction that you describe? As with writing, there are enough technical members who know the theory, and a lot of members will likely be happy to try to imagine it and give thoughts. You need only ask.
---
Wall of text over. If you made it this far, have a digital gold star: *!
Hopefully this thread has got people thinking. I'm sure some of the designers are going to chip in, as will the engineers who will tell me that transparent displays aren't as great as I say they are, or work in different ways.
Either way, it would be nice to get a discussion going about how fiction is getting it wrong. I can't think of any examples from books to complement the examples from film, but I'm sure someone can.
And even if you don't know the theory behind it all, I'm sure you've got opinions on what you think is done wrong. Post them! If it turns out that there are reasons behind things being the way they are, and someone can explain them, then we all gain that little bit of extra understanding.
Come on, prove the part of me that thinks this should be in GWD right!
---
EDIT: This obviously applies to stories set in modernish times, but I guess it also applies to fantasy; it's just that I should have been in bed before I started posting, and in this state I can't think how. I don't know... all trebuchets are built with the control mechanisms looking the same way, and put in the same place, to aid familiarity between different models and designs?