Quantum Foam of VR

Where exactly does Hypatia exist? It lives in the place in-between, beyond the three-dimensional Euclidean space we currently know.

This city is not in the organic world of matter that we know; it “grows” in the quantum foam, where we find the very foundation of the fabric of the universe. It is sandwiched between the technology that humanity has brought forward and our own existence as sentient beings. We are introducing a reality that may someday compete with our ideas of what’s real.

We barge into this alien world known as VR and cannot be certain we will not alter the space-time continuum. Back in 1439, Johannes Gutenberg was introducing Europe to the movable-type printing press; he could never have imagined the electronic movable type we use today, where photons are emitted through a glass layer, powered by a wafer-thin battery, on the device you may be reading this with – we call it a smartphone. For him, this magic might have broken his ideas of the space-time continuum and what reality meant to someone in the 15th century.

Here in the early days of the 21st century, we are about to sling the mass of reality into the space between. In the millimeters between our eyes and a new kind of tele/microscope, we will loft Hypatia into existence. The perception and meaning of it will be yours to discover.

What will exist in those moments of virtual habitation and observation may defy known physics and phenomena; you will be an explorer of the unknown.

We bring discovery, curiosity, and intellectual wandering directly to your imagination. We will challenge your mind to comprehend the untrammeled reality that lives in the in-between, in the quantum foam of potential. Welcome to the new space-time continuum of Hypatia.

Social

Social Networks. This is not a concept that came to humanity with the emergence of Facebook, MySpace, or Twitter. The Social Network is an essential part of the fabric that defines our very culture, and it stretches back many millennia to the beginnings of our use of symbolism.

In caves, on beads, and on cliff walls, we find markings, images, and other traces of messages left for others. This form of artistic communication transcends space and time and is some of the earliest proof that humanity was building a primitive social network. You see, when someone leaves “graffiti” for others to come across at some future point, they have left an implicit message: “I existed, and I leave you this clue to my having been here before you.” This brings the visitor into the social web of having shared the same space with the original poster.

As society evolved, we strove for better communication that would give more effective insight into who we were. Using cuneiform symbols and, later on, hieroglyphs, we started leaving exacting details about our history and accomplishments. Sculpture reaches back tens of millennia, while woodblock printing emerged some 1,200 years ago. As technology improved, we witnessed the 15th-century arrival of the printing press using movable type, but still, we would need another few hundred years for the steam engine to help launch mass production and give us the ability to print newspapers and books in volume, which would in turn allow greater sharing and distribution of information.

These accomplishments were essential in allowing us humans to share knowledge of our history and to create literature that would let us dream forward about what the future might look like. We were well on the road to a global social network.

Approaching the 21st century, we witnessed an explosion of communication technology. Radio, movies and TV, the phone, the Internet, and, more recently, the smartphone have all been instrumental in networking our globe. The people we reach out to in faraway places, sharing conversation, photos, and cat memes, form our new social network – it is immediate and spontaneous. Often, though, our view is clouded by brand awareness, and we define the social network as something that resembles Facebook.

Bruno Latour, in his book “Reassembling the Social: An Introduction to Actor-Network-Theory,” attempts to bring clarity to this complex idea by showing that the social network is not simply a closed system of interactions between known actors in a particular location. The network is complex and now likely impossible to define, as the breadth of global inputs from our modern communication and entertainment systems works dynamically and chaotically to connect people independent of systems of association and geographic proximity.

This is an important concept because as we humans start to explore Virtual Reality, we are closing the final gap between information and experience. From early history leading up to today, we have been observing the world and building experiences based on our physical location and access to scarce resources. This required our proximity to or observation of landscapes, artifacts, images, videos, and lectures about the particular subject matter. From this, societies and organizations would arise, allowing like-minded participants to share their particular curiosity.

With the advent of VR, all are welcome to visit the cave of the mysterious image. Every one of us will be invited to interpret newly discovered hieroglyphs from places that only exist in the imaginations of those sharing their art. As experiences are a large part of how people illuminate culture and history, we can project that the form and substance of global communication will need to change as it responds to millions of new vantage points brought on by VR. The shared environment will no longer be locked to a geographic locality or certainty about cultural perspectives. Our written and visual languages used to interpret this new social network will need to evolve as rapidly as the art and tools that are producing this shift.

It is in our nature to leave a mark, be it momentarily in the dirt, as a handprint on a cave wall, in pigments on a canvas, or as little letters printed on the screen before you. What will this all mean when our mark is left upon this new type of reality? What kind of culture and history will we create? Where will our art be found by future generations who explore the places we have been while immersed in Virtual Reality? How will they interpret our social networks, which will have continued to reach out across space and time?

Cyberpunk in TimefireVR

It’s been a long time since I’ve penned anything other than business documents. In the intervening months since I last dropped words here, Timefire has gone through changes, as so many start-ups do.

One giant change is that we’ve grown. A year ago, we were six; today, we are nearly twenty full-time creators. We also changed our name and changed it back again; it wasn’t even the first time we’d done that.

Like many other early VR adopters, we believed Virtual Reality was nearly at our doorstep. In reality, it started to look like a fast-moving target that might find a consumer release in some distant future (fortunately not another decade or two away, like back in the 1990s). Now it’s said we’ll see the first consumer products by the end of this year; go Valve/HTC and Samsung.

While Samsung rolls out the new Note 5 and GearVR for the general public, we are not building for mobile unless you count the work we’ve been doing with 360 videos. While we all love immersive 360-degree video, and it will find its place in our VR world and in our production skills, it is explorable, interactive Virtual Reality that we are looking forward to the most.

And this is where our efforts have been focused.

We had to scale. Our original plan and budget said to stay small and spend slowly, but back then, there was just Oculus, no Facebook, no Magic Leap, no billions of dollars being spent wildly. Then, in a burst toward the latter part of 2014, the VR renaissance kicked into high gear and made the quantum leap. Either we adapted to what was happening or we'd be irrelevant before we saw the light of the display mounted an inch from our face.

A programmer goon or two joined a database guru and started designing the backend that would drive our customer sign-ups and allow us to build all types of interesting new tools and functionality. More artists came on board, including a terrific digital sculptor, a level developer worth his weight in gold, a dedicated texture artist, and other creators who helped round out our growing team.

With this heft of creative skill now at our disposal, it was time to make some big changes to our world. We put our noses down and went dark. We have a lot to do.

Substance Designer

I’ve been spending more than a few days in Allegorithmic’s Substance Designer, working through a massive number of textures I’ve downloaded from GameTextures. The process is tedious, especially the metal PBR workflow, but after more than a few days grinding through a directory of more than 250 base materials, I’m kind of addicted. At night, I go home and work on some simpler stuff, assembling my hoard of images from CGTextures. These are easy, as it’s just a single bitmap I have to wrangle.

The glue making all of this possible is Allegorithmic’s new tool found in Bitmap2Material 3.0. It’s a “Node” that works as a kind of plugin for Designer: feed the node the images you want converted for use in a PBR workflow, and it does the heavy lifting. Of course, nothing is ever totally easy, and so I wrestle with Masks, Emissive textures, Blend nodes, Levels, and the adjustment of Normals in order to get the Substances just right for our shared library. Between Allegorithmic’s database of procedural textures (about 850 of them), the 1,000 CGTextures images, and the 1,000 GameTextures files I’m working with, I could be at this for quite a while. In the end, I think this will prove to be an invaluable asset to our team, though I may have built up a momentum that demands I just keep going, exploring the possibilities this amazing software offers us.
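To give a feel for the sort of heavy lifting that node automates, here is a toy sketch in Python – my own illustration, not Allegorithmic’s algorithm, and the file names are made up. It treats a single bitmap’s luminance as a height field and derives a tangent-space normal map from its gradients, one of the many channels B2M produces:

```python
# Toy illustration of deriving a normal map from a single bitmap,
# the kind of conversion Bitmap2Material automates (file names hypothetical).
import numpy as np
from PIL import Image

def normal_from_bitmap(path, strength=2.0):
    """Treat the image's luminance as a height field; return a normal map."""
    height = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    # Gradients approximate the surface slope in x and y.
    dx = np.gradient(height, axis=1) * strength
    dy = np.gradient(height, axis=0) * strength
    # Per-pixel normal (-dh/dx, -dh/dy, 1), normalized to unit length.
    normal = np.dstack((-dx, -dy, np.ones_like(height)))
    normal /= np.linalg.norm(normal, axis=2, keepdims=True)
    # Remap from [-1, 1] to the [0, 255] encoding a normal map stores.
    return Image.fromarray(np.uint8((normal * 0.5 + 0.5) * 255))

normal_from_bitmap("brick_wall.png").save("brick_wall_normal.png")
```

The real node does far more – AO, curvature, roughness estimation, and so on – but the principle of computing every channel from one source bitmap is the same.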

Bitmap2Material from Allegorithmic

Bitmap2Material 3.0 was released by Allegorithmic yesterday and now boasts a PBR workflow. Physically Based Rendering, or PBR, has been making great inroads this year, with major game engines now supporting it or about to. For those who need to know, PBR allows different surfaces to appear more photo-realistic because the material channels describe how light bounces off of them. If you are interested in knowing even more about how PBR works, the team at Marmoset has a great article that goes into depth about the specification.
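As a concrete taste of the metal/roughness convention most of those engines share – my own sketch, not taken from the Marmoset article – a single base color plus a metallic value determine both the diffuse color and the specular reflectance (F0) of a surface; the 0.04 used below is the commonly cited approximation for non-metals:

```python
# Sketch of the PBR metalness convention: base color + metallic determine
# the diffuse color and the specular reflectance (F0) light bounces with.
def pbr_surface(base_color, metallic):
    dielectric_f0 = 0.04  # typical reflectance for non-metal surfaces
    diffuse = tuple(c * (1.0 - metallic) for c in base_color)
    # Metals tint their reflections with the base color; dielectrics do not.
    specular_f0 = tuple(dielectric_f0 * (1.0 - metallic) + c * metallic
                        for c in base_color)
    return diffuse, specular_f0

print(pbr_surface((0.95, 0.64, 0.54), metallic=1.0))  # copper: no diffuse
print(pbr_surface((0.30, 0.50, 0.20), metallic=0.0))  # painted dielectric
```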

After you install the program, all you need to do is drag an image into the interface, and Bitmap2Material will compute all of the required channels, such as Base Color, Roughness, Metallic, Diffuse, Specular, Glossiness, Normal, Height, Displacement, Bump, Ambient Occlusion, Curvature, and Detail Normal. But that is only a small part of the magic on offer; it is the parameters in the right column that really show off the power of B2M 3.0. Besides now being able to work with 8K, 16K, and 32K (gasp) textures, there are eight other main categories of options for affecting your image. A caveat regarding those super-large images: I’m using a GTX 980 with 4GB of VRAM, and 8K images bring this new card to its knees; it would appear that a Titan with 6GB or a Quadro card with 12GB would be required for the heavy lifting those sizes and larger would demand.
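Some back-of-the-envelope arithmetic makes the VRAM squeeze obvious: every channel B2M computes is another full-resolution image held in GPU memory. The bytes-per-pixel figure and the channel count below are my assumptions, not B2M’s actual internal formats:

```python
# Rough VRAM cost of holding many texture channels at once
# (assumes 4 bytes per pixel; B2M's internal formats may differ).
def channel_megabytes(resolution, bytes_per_pixel=4):
    return resolution * resolution * bytes_per_pixel / 1024**2

for res in (8192, 16384, 32768):
    per_channel = channel_megabytes(res)
    # A dozen channels (Base Color, Normal, Roughness, AO, ...) in flight:
    print(f"{res} x {res}: {per_channel:,.0f} MB per channel, "
          f"{12 * per_channel / 1024:.1f} GB for 12 channels")
```

At 8K that is already around 3 GB for a dozen 8-bit channels, which squares with a 4GB GTX 980 struggling; at 16K and 32K the totals make it clear why far larger cards would be needed.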

With your image loaded, it’s time to get busy setting up your new material for export. The list of operations and adjustments is lengthy, too much to dive into here today. Better you download the FREE TRIAL and start exploring what Allegorithmic has unleashed.


While this new incarnation is a fantastic development, it is what is included in the Pro version that is truly amazing for our work. Allegorithmic has created integrations that allow B2M 3.0 to work inside 3DS Max, Maya, Modo, Unity, and Unreal Engine (sadly not Blender), but even this is not what makes this version truly perfect. It is the inclusion of two nodes that bring the full functionality of B2M 3.0 inside Allegorithmic’s Substance Designer. One of the new nodes is purely for a PBR workflow; the second one is a dream node here at Timefire – it’s been created specifically with the Unreal Engine 4 material workflow in mind.

Once the node is installed, drag it into the Graph view and bring any bitmap into the program. Feed the bitmap’s output into this specialized node, then connect the Bitmap2Material node’s outputs to the graph’s output nodes, and the rest of the work is done for you. In mere seconds, the Outputs are calculated, and Normals, AO, Curvature, Height, Roughness, and more are ready for export or further modification. It is that easy to use.

Clicking on the Bitmap2Material 3.0 node in Substance Designer opens the “Instance Parameters” column, which allows the same granularity of modification found in the full B2M 3.0 program. Something else that needs pointing out: this version of B2M supports Mikktspace tangents – a standard for calculating tangent space that is popular with xNormal, Blender, 3D-Coat, and, as I understand it, Unreal Engine. We have yet to test exactly what this means for our workflow, but anything that brings better quality and compliance with industry-respected tools is a welcome addition. While B2M 3.0 supports Mikktspace tangents, users of Substance Designer will have to wait a short while until those guys at Allegorithmic push out version 4.5 – rumored to be coming SOON.