The psychology of motion pictures and frame rate.
Movies have been around for over a hundred years. Videotape, on the other hand, only made a significant entrance into the consumer market around the 1980s. Around that same time, a one-hour motion picture was released for the evening television audience in America, perhaps the first motion picture produced entirely on videotape.
The most significant differences were the frame rate and the quality of the image. Neither was better or worse, just different. Video displayed at 30 frames per second, and its crisp image looked like a newsroom broadcast. Film, on the other hand, runs at 24 frames per second and has a grainy appearance due to its chemical composition. Videotape was different enough that the production was soundly rejected by the viewing public, and all efforts promptly returned to film.
It turns out that the human brain can detect the difference in frame rate. Having grown up on 24 frames per second on the silver screen, people never got comfortable associating 30 frames per second with motion pictures. They subconsciously struggled to enjoy the movie because the higher frame rate made the motion look too smooth, too much like live television. That is why that production was among the first and last of its kind.
What does this have to do with video games?
The psychology of video games and photorealism.
Today’s graphics cards are far more powerful than those of twenty years ago. There are now cards powerful enough to render scenes in real time that are difficult to distinguish from live video feeds: complex scenery such as forests, lakes, deserts, office buildings, even people’s faces up close. The popular opinion is that such photorealism helps players immerse themselves in their video games. But is this true, or just hype driven by the excitement and novelty of high-fidelity graphics?
People claim that virtual reality and augmented reality, today’s bleeding-edge graphics technologies, offer the most immersive experience available. Is that true? Or are they misusing the term immersive to hype the technologies?
To answer these questions and understand the psychological impact photorealism has on video games, we should first attempt to define the term immersion.
Immersion in a movie or video game is a state of consciousness in which one is so deep “in the plot or action” that they are no longer consciously aware of their surroundings. There is no better definition of the term in these contexts. So let’s examine what causes a player to become immersed in a video game, and what interrupts that immersion.
When one is immersed in a video game, they are “in the game”. They see themselves on the map. They see themselves not as controlling the pawn, but as the pawn they are controlling. When they pull the trigger, they see themselves firing the gun in their own hand, not controlling a pawn firing a gun in the pawn’s hand. They think of their actions in the game as though the game environment were their reality. Their conscious thought process is driven by what is happening in the game, not by what is happening physically around them.
They could also be watching a cutscene in a video game (just as in a movie) and have a fly-on-the-wall moment: watching a conversation, thinking about what is being said and its implications for their characters or the people of the world, or learning more about the backstory and how it evolves. In any case, they become so caught up in the action or the plot that they are, for a time, unaware of their physical surroundings.
In every case, one key rule applies: everything the player sees happening in the game, everything they experience through its visual, audible, and tactile feedback, must be reasonable, logical to the environment, and cohesive with the scene. I say logical to the environment because, in games, physically impossible behavior can still be logical within the game’s world. Once the player learns the rules, the rules must be adhered to, so that what the player experiences remains consistently logical to the environment they have come to know within the game.
Anything that suddenly seems out of place, unreasonable given the events unfolding, or that simply doesn’t make sense forces the player’s conscious thinking off the gameplay and onto the anomaly. It is this contemplation that pulls their consciousness outside the unfolding action and onto the singular event that doesn’t belong in the game. They are no longer consciously in the environment the game created. They are now sitting outside that environment looking inward, examining it to understand what went wrong. They now see the game as a simulation running on their Xbox.
So how does this relate to photorealism in video game graphics?
Art defines the scene, while graphics generate it. By that measure, virtual reality and augmented reality are graphics: they generate the scene, they don’t define it. Photorealism is also graphics; it does not define the scene, it merely generates it.
When photorealistic geometry is created for a video game, is it art? Not at all. The scene itself may be art; the graphics are not.
Why is this point being driven home?
Because people grew up on Halo, Doom, Half-Life, and more recently Call of Duty. Those games never had anything close to photorealism. Not anything close. Yet so many have fond memories of them.
Even the newer titles, though they have better graphics, are far from the levels of photorealism that today’s graphics cards are capable of producing.
In every case, players can immerse themselves in those games, and that immersion has nothing to do with the graphics. Even low-poly art can have a quality of its own. Art, no matter how simple, helps immerse the player in the game if it is beautifully mastered.
So now we get to the big question: why does photorealism hinder immersion if it makes for a beautiful scene?
Because it is too close to realism to be immersive. People look at the tree bark, the leaves on the ground, the detailed reflection of the sunlight off the stained brick wall. What do you think goes through their minds? Here are some possibilities:
“Boy, that person’s face is close, but it’s not quite 100% photorealistic.”
“I wonder how they got all those leaves to blow so well and look so great.”
“This scene sure does shine with my new 4090!”
Do you see the problem? Photorealism is so new that its very presence interferes with immersion, pulling the player out of the game to look at it from the outside and admire the technology. This is the insurmountable problem game developers face today.
Photorealism will take a very long time (if ever) to catch on as the primary rendering approach for games. Players today will be too busy admiring it, questioning it, or looking for flaws in it, because it is bleeding edge and what they paid big bucks to see. In other words, they are so busy admiring the technology behind the game that they are no longer admiring the game itself.
Just as videotape, with its 30 frames per second and grain-free image, could not replace film simply because it was too different for those who grew up with movies on the silver screen, photorealism will not replace the immersive power of simple, beautiful art on modest graphics hardware. And just as Hollywood, when it is serious about producing quality, still turns to old-school film, any studio today, triple-A or indie, would be well advised to produce quality art that does not consciously attempt to approach realism.
Today’s gamers are not accustomed to scenes that approach real life, and any attempt at photorealism would leave them too distracted to reach and sustain immersion. Gamers want to play games; they want that escape from reality. Any attempt to force reality upon them will keep them from enjoying the game for what it was meant to be: a game.