Need help with virtual reality descriptions

Penny

Hey, first time posting here. I'm just starting out on my first attempt at writing anything of any length. Most of my stuff so far has been Dungeons & Dragons material or game design documents, scripts, etc., so I'm used to some pretty bare-bones structuring.

Anyhow, I'm writing a story that heavily features virtual reality, the idea being that people plug in Matrix-style and experience it as though they were there.
But I'm having trouble working out how to describe the use of a virtual user interface.

My first idea was a 2D hovering menu that could be seen and touched only by the user, because 3D menus are notoriously difficult to work with and... let's face it, who wants to describe navigating a 3D menu anyway?

I was wondering if anyone knows any good examples of user-interface description where I could get a feel for it.
I don't really want to treat it like a mobile phone, with "so-and-so swiped through menus until he found blah"-type lines.

Would love some advice on describing computer/phone usage, etc. Thanks :D
 
Ready Player One does this, but the best in my opinion is Daniel Suarez in his books Daemon and Freedom™.
 
Honestly, I think you can probably stay summary-level when talking about menus and navigation, unless it's crucial to your plot. No need to give every detail of the layout and interface.

That said, I have an Oculus Rift, and menus often attach virtually to your wrist, so you can just lift your arm and see them almost like a control panel there. I could imagine something similar... a 2D HUD pinned to your virtual arm or something. Usually they aren't "fixed" to the frame of your viewport, because then you can never turn to face them straight on. You could look up YouTube videos of some Oculus games to see how people have actually tried to solve this (though without playing the games, it will be harder to tell how they really work and how effective they are).
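If it helps to see the logic, here's a minimal sketch (the scene objects are invented for illustration; this isn't the actual Oculus SDK). The whole difference between a wrist-pinned menu and a viewport-fixed HUD is which transform the panel follows each frame:

```python
from dataclasses import dataclass

# Hypothetical minimal scene objects; a real engine supplies these.
@dataclass
class Transform:
    position: tuple  # (x, y, z) in world space
    rotation: tuple  # simplified Euler angles

@dataclass
class Panel:
    transform: Transform

def update_menu(panel: Panel, wrist: Transform, head: Transform,
                pin_to_wrist: bool = True) -> None:
    # Re-anchor every frame: pinned to the wrist, the menu only enters
    # view when you lift your arm; pinned to the head, it sits in the
    # middle of your vision and you can never turn to face it straight on.
    anchor = wrist if pin_to_wrist else head
    panel.transform = Transform(anchor.position, anchor.rotation)
```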
 
Good point, Zmunkz. A few anime I've watched have menus that open in front of the user like a book, but I can't see anyone using a menu like that while actually running or doing anything. If it opens up linked to an arm, you're still free to act, and it doesn't block your view if you have to fight or whatever.

Also, I'm a 3D animator and I've got an Oculus :p I'm making a VR game too; we solved our UI problems with a wrist computer that acts like the Pip-Boy in Fallout 3 and 4.

I'll definitely take a summary approach to the majority of menu actions, but when a new action comes up, like hacking or using some kind of app that behaves like magic in the virtual world, I'm probably going to find myself describing it in some way.

Hmm... maybe a floating menu that can be called up in front of either hand, acting kind of like a phone with scrolling menus, functions and icons, set up so you could use it one-handed, or bring both hands in front of you and it turns into a keyboard. It doesn't have to obey real-world physics and can do things that would be impossible in the real world, after all.
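The logic of that idea is simple enough to sketch (everything here is invented for illustration): the interface reshapes itself based on how many hands have summoned it.

```python
def menu_mode(left_summoned: bool, right_summoned: bool) -> str:
    # One hand summons a phone-like panel; both hands merge into a keyboard.
    if left_summoned and right_summoned:
        return "keyboard"  # two panels fuse into a full virtual keyboard
    if left_summoned or right_summoned:
        return "scroll"    # single panel: scrolling menus, functions, icons
    return "hidden"        # dismissed entirely

# e.g. menu_mode(True, False) -> "scroll"; menu_mode(True, True) -> "keyboard"
```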

I was toying with a personal-assistant approach, but I think something like that would be too expensive for my main character to afford for now.
 
If you're looking for subtle, it might be difficult; otherwise it might just look strange.

I was thinking in terms of a menu they can see that's driven by the blink of an eye. Maybe it's activated by squinting both eyes, or even crossing the eyes; then a furrowed brow pages through it, and left- and right-eye blinks held much longer than natural blinking serve as inputs.
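As a rough sketch of that scheme (the threshold is an assumption; a real system would calibrate per user), telling a deliberate command-blink from a natural one is essentially a duration check, since natural blinks only last a few hundred milliseconds:

```python
NATURAL_BLINK_MAX_S = 0.4  # assumed cutoff for an ordinary reflex blink

def classify_blink(eye: str, closed_duration_s: float):
    # Ignore anything short enough to be an ordinary blink.
    if closed_duration_s <= NATURAL_BLINK_MAX_S:
        return None
    # Deliberate long blinks page left or right through the menu.
    return {"left": "page_back", "right": "page_forward"}.get(eye)

# classify_blink("right", 0.15) -> None (natural blink, ignored)
# classify_blink("right", 0.90) -> "page_forward"
```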

Or have a menu that floats over the back of one hand, using the fingers of the other hand to actuate buttons. That way they could sit contemplatively with hands together and push buttons; if they recall where the buttons should be, they might be able to do so without looking.

Just some food for thought.
 
I wouldn't use eye controls for a very good reason... picture someone navigating menus with their eyes and eyebrows while you look at their face :p

At the moment the idea is that it's set 400 or so years in the future, so humans have had some 200 years to adapt to cybernetic augments, virtual reality and the like, and they've developed the ability to "blink" with their minds. They can essentially supply about two or three buttons' worth of input, though it requires a focused mind. So you could, say, answer a phone, navigate left and right on a menu, and activate or deactivate things.
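Put another way (a hedged sketch; all the names are invented), those two or three "mental buttons" form a tiny input vocabulary whose meaning shifts with context, the way one physical button can answer a call or select a menu item:

```python
MENTAL_BINDINGS = {
    "incoming_call": {"confirm": "answer", "cancel": "reject"},
    "menu":          {"left": "prev_item", "right": "next_item", "confirm": "select"},
    "device":        {"confirm": "toggle_power"},
}

def handle_mental_input(context: str, signal: str):
    # Look up what this mental signal means right now; None if it's unbound.
    return MENTAL_BINDINGS.get(context, {}).get(signal)

# handle_mental_input("menu", "right") -> "next_item"
# handle_mental_input("device", "left") -> None (means nothing here)
```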

More complicated controls require voice, hands, etc.

My virtual menu system now is basically a Kindle-sized device that only the user can view and interact with. The device can be attached to any part of the body as though by a magnet, and its shape, size, color and menus are all customizable, so it's a lot like a phone user interface without any hardware.
I'm toying with having it able to float, but there hasn't been a call for it in the story yet.
You can also make it appear and disappear with a mental command, so it's never in your way; making it appear generally puts it in your hand.
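In data terms (a sketch only, with invented field names), the device is really just a bundle of per-user preferences with no hardware behind it:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualDevice:
    owner: str
    size: str = "kindle"                # shape and size are customizable
    color: str = "matte black"
    attach_point: str = "left forearm"  # sticks anywhere, as if magnetic
    visible: bool = False               # rendered only for the owner
    menus: list = field(default_factory=lambda: ["home", "comms", "apps"])

    def summon(self) -> None:
        # Mental command: by default it appears in the user's hand.
        self.attach_point = "hand"
        self.visible = True

    def dismiss(self) -> None:
        # Vanishes entirely, so it's never in the way.
        self.visible = False
```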


Real-world controls are probably going to be a phone-like tablet thing plus mental controls, voice, etc. I don't like the whole controlling-something-with-your-eyes idea; there's too much chance you'll accidentally open eBay and buy a bunch of stuff.
 
the idea being that people plug in Matrix-style and experience it as though they were there.

Jacking in like The Matrix versus using haptics like Ready Player One would be very different. In RP1 the user must move to a degree. Jacking in would have far more options: the computer could virtually read the user's mind and easily support non-human bodies.

psik
 
I think human brains are wired to work as a human, not as, for example, a dog. You could have a bipedal shape, but the brain is wired for four limbs, fingers, thumbs, etc. It's not wired to wag a tail, nor to control your phone without touching it.

So your nerve interface has to interpret an existing signal and rewire it to something else to give you, say, an extra limb. An example would be an amputee twitching chest muscles to control a robotic arm; the same premise holds for intercepting nerve signals. Controlling a limb you don't have means rewiring things, and then the subject has to learn to use their brain in a different way.
(Well, that's the rule system I figured out, anyhow; roughly like the sketch below.)
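Something like this, anyway (all the signal names are made up for illustration): each foreign limb borrows the signal of some human muscle or digit, and fluency is then a matter of practice rather than hardware.

```python
REMAP = {
    # foreign limb  <-  hijacked human signal
    "tail":      "lower_spine_flex",
    "extra_arm": "chest_muscle_twitch",  # the amputee trick, generalized
}

def route_signal(human_signal: str, intensity: float):
    # An intercepted nerve impulse drives every limb mapped to it;
    # learning to fire that impulse on purpose is the hard part.
    return [(limb, intensity)
            for limb, source in REMAP.items()
            if source == human_signal]

# route_signal("chest_muscle_twitch", 0.7) -> [("extra_arm", 0.7)]
```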

So I think a non-human body would be possible with a great deal of practice, but I don't think, for example, a human could adapt their brain to control an octopus body. A spider, though... spiders don't have hands, so... fingers rewired to the legs? That's how I would see it working.
As for reading minds to get commands... I really don't think anyone would want software capable of reading their thoughts, or writing them, because, yeah... who wants their brain hacked or their innermost thoughts sold to the highest bidder? Well, other than Google and the government.
 

It's exciting, but how you translate this science into drama is the big question. It doesn't have to be trouble, though: if you want to write your characters extra limbs, or even give them spider legs, you can do that in the description and then translate it into action without going into how it was made.

For you, as a writer, the important thing is to understand the physics and mysteries of your technology and then be able to translate them into a readable form that a larger audience will enjoy. You definitely have the skills, the imagination and the ability to research. I'm intrigued to see what you post in the critiques. (y)
 
Mmm, coming from a roleplaying background, I like a complete, consistent world with depth. Ninety percent of what's in my worldbuilding document will never see the light of day in the actual story, but the rules, factions, economy and science are what drive the actions of everyone in it.
I can explain things if I really think it's needed, but I'd rather not have to.

I think the things that go unsaid about your world but still affect it can be more powerful than blatantly stating "this is how it all works", because they make the reader feel that the world is complete and has structure and permanence.
 
I wouldn't use eye controls for a very good reason... picture someone navigating menus with their eyes and eyebrows while you look at their face :p

Actually, a number of existing VR games use eye control for menus. They typically require you to stare at the menu option for a couple of seconds to trigger it, so it's not an ideal solution, but it does work. Floating menus that you can touch generally work better.
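The underlying mechanic is simple. A rough sketch (the two-second figure is just a typical order of magnitude; real games tune it): nothing triggers until the gaze has rested on the same option long enough.

```python
DWELL_SECONDS = 2.0  # assumed trigger time; real games tune this

class DwellSelector:
    def __init__(self):
        self.target = None  # the option currently being stared at
        self.held = 0.0     # how long the gaze has stayed on it

    def update(self, gazed_option, dt: float):
        # Any wander of the gaze resets the timer on the new target.
        if gazed_option != self.target:
            self.target, self.held = gazed_option, 0.0
            return None
        self.held += dt
        if self.target is not None and self.held >= DWELL_SECONDS:
            self.held = 0.0
            return self.target  # fire the selection
        return None
```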
 
For most it's not eye detection yet; it's head detection currently. But yeah, floating menus are closer to our own experience, so they make it more relatable to the reader.
 
Hi there,

Not sure if this is what you're after, but the anime Sword Art Online is about people trapped in a VR game, and they have menus; in fact, that's the hook of the story, so it may be of interest, and it's a great series. 3D menus have been mooted many times; from memory, back when Virtual Reality Modelling Language was a thing (I saw a live demo), the menus were like barrels that you spun round to bring options forward, as opposed to having a flat list. Also from memory, a 3D/2D menu system was used for the file storage in Hackers (the film) and, briefly at the end, in Jurassic Park.

Not written descriptions, but it at least shows how it's been done on film.

IttB
 
I'd also consider looking at how the BrainPal works in Old Man's War as a possible UI, given how your synapse connections tie into the VR reality.
 

Exactly: a version of "jacked in" versus haptics.
 
For most it's not eye detection yet; it's head detection currently. But yeah, floating menus are closer to our own experience, so they make it more relatable to the reader.

Yes. Actual eye-tracking is still R&D, though I'd expect it to be included in the next generation of headsets, even if only to more accurately show what players are doing in multiplayer environments.

But head-tracking and eye-tracking aren't that much different on the existing models, because the field of view is small enough that you mostly look straight ahead anyway.
 
Wow, that was a good story; you are not wrong.
And very relevant to the current era we live in.
 
