It’s that time of year again. Join me for video game (and board game) development insights at the 2017 Game Developers Conference in San Francisco!
As I do every year, my strategy for maximizing the benefit of the conference is to choose two topics I’m weak in and gain some understanding and/or training. This year my topics are “storytelling” and “board game design”. As it turns out, this year’s tutorial and boot camp sessions include day-long programs on those two exact items. Check out the following descriptions and links to in-depth articles on insights I gained this time around.
More links will become live as I have time to complete posts. Check back frequently to see what’s new!
Storytelling Fundamentals
Speaker: Evan Skolnick
This was a day-long boot camp complete with group exercises. I’m going to break the day down into several posts. Around 90 percent of the talk was about narrative structure in general, with the remaining 10 percent connecting that structure to game design practice. For these articles, I’ll save the game connection for my final post so that we can focus on understanding the narrative structure first.
This is the inaugural year for board game sessions at GDC. The day of talks began with a focus on the psychology of play, then led into some more inspirational talks from designers. Nearly every board game presentation came from developers who have published a board game but are rooted in the video game industry. During discussions of player psychology and game mechanics, the line between video and physical games blurred. That’s a good thing.
Microsoft HoloLens is a new Mixed Reality computing platform. The user looks through a visor to see digital content projected into their world. The unit uses built-in sensors to build a real-time 3D model of the user’s surroundings, then uses that information as collision and visibility data for interacting with digital content. The computer is entirely contained in the headset, not connected to another machine, and it runs a modified edition of Windows 10. The device uses the Unity 3D game engine for immersive content creation; however, it will run any Windows Store app in a projected 3D window. For more information, check out Microsoft’s official HoloLens page.
What is the CART Wars Hololens Project?
The IGD (Interactive Game Design) course at CART (Center for Advanced Research and Technology) has spent its first semester using the latest hardware innovation from Microsoft (HoloLens) as a platform to learn digital content creation and coding for 3D games and simulations. The IGD program is merged with the students’ senior-year English requirement (taught by a credentialed English teacher). This year for our fiction reading, students embraced the new novel Star Wars: Bloodline by Claudia Gray. In parallel, students were given the task of creating interactive 3D Star Wars content in the Unity 3D game engine, and then realizing those experiences as Mixed Reality content for HoloLens.
What was the result?
The students successfully demonstrated the skills necessary to create interactive content in the form of a traditional game (3D modelling, texturing, coding). We’re still working on our ability to make effective use of HoloLens. At the close of this project, students had successfully made their content load. They can walk around their scenes, and objects can interact with the environment; however, they’re still working toward making the user interaction a smooth experience. We still have an entire second semester to go. The students are quite capable, and I’m positive one of the project groups will develop a complete experience.
The Project in Pictures (Gallery)
How Can a Student Attend CART?
CART offers an array of different labs focused on various careers. If you have an interest in attending CART’s IGD program or any of the labs offered, here are the basic requirements:
Must be a junior or senior in high school.
Must be a student in the Fresno or Clovis Unified School Districts (in CA).
Students do not pay to attend CART. It is not a charter or private school. It is a free program offered to students by the Fresno and Clovis Unified School Districts. For more information, specific lab requirements, and to apply online, visit www.cart.org.
Why is This Project on Step 2’s Website?
Step 2 Digital Design is not so much a business as it is a phase of life. It has become my personal developer blog. I (Matthew Hodge) am the CART IGD instructor. Outside of my time teaching, primarily in the summer months, I am an aspiring indie game developer. I bring the technologies and ideas I professionally play with into the classroom, which is why I chose to have my students pursue HoloLens this year. They get experience developing for what’s next, and so do I.
Check out last year’s student project Jurassic Cart for some AR dinosaur action!
“Tactical Twitch” is undergoing major refactoring (video summary at bottom). The current prototype works fairly well and displays nicely; however, it struggles on low-end devices. At this time, I am rewriting all of the functionality. This really sucks, but it will be completely worth it on the other end. Let’s look at what caused this situation, and all of the benefits a rewrite will include (hint: HoloLens edition).
After my previous look at object pooling instead of Instantiate and Destroy, I began to realize a number of ways in which I’ve written unoptimized code. For one thing, object pooling is better done with Lists than Arrays. New insights such as this are coming from both peers and my nearly complete BS degree in software development from Bellevue University. I now have a much better grasp on good object-oriented design.
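To make the pooling idea concrete, here’s a minimal sketch in Python (the game itself is C#/Unity, so the names here, like BulletPool, are purely illustrative): objects are pre-created once, handed out on spawn, and returned to an idle list on despawn instead of being destroyed.

```python
class BulletPool:
    """A minimal object pool: reuse objects instead of create/destroy."""

    def __init__(self, factory, size):
        self.factory = factory
        # Pre-create every object up front; a list makes it easy to
        # grow the pool later if demand exceeds the initial size.
        self.inactive = [factory() for _ in range(size)]
        self.active = []

    def spawn(self):
        # Reuse an idle object if one exists; otherwise grow the pool.
        obj = self.inactive.pop() if self.inactive else self.factory()
        self.active.append(obj)
        return obj

    def despawn(self, obj):
        # Return the object to the idle list instead of destroying it.
        self.active.remove(obj)
        self.inactive.append(obj)

pool = BulletPool(factory=object, size=4)
a = pool.spawn()
b = pool.spawn()
pool.despawn(a)
c = pool.spawn()   # 'a' is recycled here; no new allocation happens
```

The point of the list over a fixed array is the `spawn` fallback: when the idle list runs dry, the pool grows itself rather than failing or overwriting a live object.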
The current “Tactical Twitch” demo represents about 2.5 months of development time, much of which occurred over two years ago. As I look back, I realize it is easier to rewrite the game than to optimize what exists.
At first this sounds like a terrible thing, but it doesn’t have to be. The prototype works well, and to everyone I’ve shown it to personally, it has communicated the game I’m trying to make. It has received generally positive feedback from unbiased sources. I not only like what I’m making; I have proof that it works and is fun. With that knowledge, I’ve left myself a clear blueprint of what it is I’m trying to code. It’s now simply a matter of execution. I look at the current build as a task list for the code I’m now writing.
I suspect there won’t be much visible progress until next summer. I’ll likely start my next round of active development with something that looks very similar to what I have now. The difference is that it will run on just about anything that plays Unity-built games.
Benefits of a Rewrite
The new version is built with Hololens in mind. The same game will function well on low end devices, as well as deliver a slightly altered experience for Microsoft Hololens.
The new code has a nifty feature I’ve been working on: randomly generated levels. This is very important. As a one-man shop, it is hard enough developing a working game, let alone all of the content players will expect from it. Level design is largely what held back my “Legend of Sky” game. The platforming was well received by my select audience, but I just didn’t have time to create the levels. The algorithm I’m creating for “Tactical Twitch” level creation can be reused in any other game I choose, of any genre. In short, when complete, I will have largely removed the time hurdle of developing content. Future games should be made faster.
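I haven’t detailed my actual algorithm here, so as a purely hypothetical illustration of what random level generation can look like, here’s a classic “drunkard’s walk” carver sketched in Python: a random walk turns wall tiles into floor tiles until a target amount of floor exists.

```python
import random

def carve_level(width, height, floor_target, seed=None):
    """Drunkard's-walk carver: a random walk turns walls ('#') into floor ('.')."""
    rng = random.Random(seed)
    grid = [['#'] * width for _ in range(height)]
    x, y = width // 2, height // 2   # start the walk in the middle
    carved = 0
    while carved < floor_target:
        if grid[y][x] == '#':
            grid[y][x] = '.'
            carved += 1
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        # Clamp to keep a one-tile wall border around the level.
        x = min(max(x + dx, 1), width - 2)
        y = min(max(y + dy, 1), height - 2)
    return grid

level = carve_level(20, 10, floor_target=60, seed=42)
print('\n'.join(''.join(row) for row in level))
```

Because the walk only ever revisits connected tiles, every carved floor tile is reachable from every other, which is the property a playable level needs for free.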
The last minor (but still important) gameplay issues have been resolved via a slight redesign of gameplay. More on that when I reveal the new demo.
*Video note: My project files load slowly in this video because this is the second part of my GPD Win test. The GPD Win is the first handheld Windows 10 PC to become available. You can see my full review at techup.step2digital.com.
Thanks for reading and stay tuned folks. I look forward to giving you more updates and ultimately something to play!
This year I’m hosting two panels at ZAPPCON. The first is on my process for developing game mechanics in my current project, “Tactical Twitch”. The second is a look at Microsoft HoloLens, which I’ve also been working with. My panels this year are on Saturday, October 15th, at 11AM and 3:30PM. Read below for an overview of each session, and don’t forget to register at zappcon.com! If you’re planning to attend my panels, you’ll have a chance to try out HoloLens during the demonstration!
We’ll begin with a brief look at a few popular software tools available to aspiring indie game developers, and quickly move into a talk about what it takes to make “good” games. Led by Matthew Hodge, this session will use his current indie project “Tactical Twitch” as an illustration of his personal journey to crafting an experience that is both enjoyable, and realistic for an individual developer to create. From his experience, we can extract a few pointers and thoughts to guide our own pursuits.
Microsoft HoloLens is the first fully self-contained holographic computer, enabling you to interact with high-definition holograms in your world. This is a chance for you to see and use a HoloLens unit! Create, connect, and explore. Transform the ways we communicate, create, collaborate, and explore. Your ideas are closer to becoming real when you can create and work with holograms in relation to the world around you.
After 10 years of graphics and code, I’ve finally found a “professional hero”. Mark Ferrari is a veteran of the digital arts. He started at LucasArts, where one of his larger contributions was “inventing” the concept of dithering in digital imagery. That’s right, there was a time when dithering wasn’t actually a thing. Mark is the guy whose stroke of genius brought so much more life and perceived quality to the limited color palettes of early games. At the 2016 Game Developers Conference (#GDC16) I attended the session “8 Bit & ‘8 Bitish’ Graphics - Outside the Box”. Pixel art is something that I’ve never really tried, so I am not very good at it; however, it is a style I’ve been wanting to experiment with for a while. Additionally, many of my students like this style. I chose this session in the hope that I would gain insight and tips to bring back to my class.
When I entered the session, I didn’t pay attention to who the speaker was. I was expecting a hot-shot young indie; what I got instead was an industry veteran who helped pioneer quality game art in an era when “pixel art” wasn’t a style but a technical limitation. In his talk, Mark detailed the process used “back in the day”, and his process for creating retro pixel art now. There is a lot of value in understanding how and why art was created the way it was. That knowledge helps us understand what we’re trying to emulate, and what is worth emulating, in modern pixel art. In this post I’ll highlight a couple of interesting points on the history of game art, then move on to a couple of modern techniques/programs that Mark demonstrated. The tidbits I gathered from this talk already have me creating better pixel art than any previous attempt I’ve made. I hope you’ll find some use for this information as well.
At the bottom of this article, I have compiled a list of links that should help you branch out and find more information on the topic.
EGA Graphics, EA Deluxe Paint, Palette Swapping | a.k.a. “The Good Ol’ Days”
Enhanced Graphics Adapter (EGA)
EGA was once the standard for color display on computer monitors. It offered 16 “beautiful”, hardly palatable colors. Why were those particular 16 colors chosen for EGA? It turns out it was because selecting colors was the job of programmers. At that time, colors had to be referenced by numerical values, and programmers tend to like keeping things as simple as possible. The colors were chosen because they could be represented by integers, as opposed to floats or doubles (decimal values).
The limited and “fuggly” color palette is part of what helped Mark get started in game art design. Mr. Ferrari said that when LucasArts first approached him, he felt he wasn’t right for the job because he didn’t know much about computers. LucasArts said that was okay, because it’s much easier to teach an artist to work with a computer than to teach a programmer to become an artist. Hiring Mark was a long-term investment in improving the quality of visuals within the technical limitations of the day.
Once upon a time, Adobe was not the industry standard for digital imaging. In the prime days of DOS (the early ’90s), it was Electronic Arts that offered the best tools for game art creation. The software they published was Deluxe Paint and Deluxe Paint Animation. In many ways, Deluxe Paint’s features would still be better for creating low-color pixel graphics than Photoshop’s. Deluxe Paint is, of course, long dead. There are a number of reasons why DP isn’t a good choice for modern workflows, but there are specialized alternatives to Photoshop available. More on that later.
During Mark’s talk, he used DOSBox to run a copy of Deluxe Paint. Some of the original work he used for illustration was only available in DP.
In addition to low-color palettes, another technical limitation that once existed was the lack of transparency. There were no alpha channels; graphics were all some form of bitmap. In the time of EGA, Mark brought about higher-quality visuals by introducing dithering to simulate the look of shadow and depth of field. Eventually EGA graphics gave way to VGA, and there were now 256 colors to work with. This is where the dithering technique really began to shine, and it’s where Mark once again stretched the technical limitations with a technique to fake transparency called palette swapping.
Palette swapping is one of the most brilliant technical tricks I’ve seen an artist pull off. Here is how it works. When the color limit increased from 16 to 256, portions of the palette could be divided out for different uses. I don’t remember Mark’s exact numbers, so I’m making them up for the purpose of illustration. An artist could choose to work an image within fewer than 256 colors, such as 128 or 196. The remaining color allotment is reserved for creating very subtle color changes in the image. These changes are so subtle that at a casual glance they won’t be noticed; however, when the palette in use is swapped out for another, the new palette has one or more of those subtle values changed to something much stronger. This is best seen to be understood.
Mark’s image that best shows this effect is his snow-falling-in-the-woods scene. If you look very closely at the image, you will notice vertical trails. The pixels in these trails are a slightly different color, but when the palette shifts, they become white. When the palette shift is in action, it looks like snow is falling, and one would believe there is transparency here, but it’s an illusion.
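Here’s a rough sketch of the trick in Python (the pixel values and palette entries are made up for illustration): the image stores palette indices, not colors, so swapping the palette makes the “secret” snow index pop to white without touching a single pixel of image data.

```python
# Indexed image: each pixel stores a palette index, not a color.
pixels = [
    [0, 0, 2, 0],
    [0, 2, 0, 0],
    [2, 0, 0, 2],
]

# Index 2 is the "secret" snow index: nearly identical to the night
# sky in palette A, pure white in palette B.
palette_a = {0: (20, 24, 48), 1: (200, 200, 200), 2: (26, 30, 54)}
palette_b = {0: (20, 24, 48), 1: (200, 200, 200), 2: (255, 255, 255)}

def render(pixels, palette):
    # Swapping the palette changes every on-screen color at once,
    # without modifying the image data itself.
    return [[palette[i] for i in row] for row in pixels]

frame1 = render(pixels, palette_a)  # snow pixels blend into the sky
frame2 = render(pixels, palette_b)  # the same pixels pop to white
```

Cycling between the two palettes over time is what produces the falling-snow animation: the hardware re-colors the whole frame for the cost of updating a handful of palette entries.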
Mark Ferrari has been contracted to create the artwork for the upcoming game “Thimbleweed Park”. The aesthetic goal of this game is to feel like an old-school LucasArts point-and-click adventure. The key point here is that it should “feel” like the old games, not actually be an old game. That is an important distinction. The team believes that their players want to relive the feeling of those games; however, if one were to actually load one in something like DOSBox, most individuals would be disgusted. What people love is the memory, the nostalgia, of the old games; when faced with the actual product, it’s typically a visual turn-off. This is especially true on modern high-definition screens. When these old games were made, the average computer monitor was a CRT with 640 x 480 (VGA) resolution. Not only are the pixels big, but they bleed together. At the time, artists actually used the lower quality of displays to help convey visual effects in their artwork and obscure unavoidable defects.
Mark eloquently put it like this: when people go to a renaissance fair, they’re looking to have an enjoyable time. They want the costumes, the events, and the food. No one actually wants to be in the Renaissance, where crap flowed through the streets, death and disease were all around, and the smell would have been unbearable to modern man. Folks just want to experience what they perceive to be enjoyable about the time period. It’s the same with modern retro games. People want a renaissance fair, not the actual Renaissance.
Here are some of the ideas Mr. Ferrari presented from his current workflow that help to inspire nostalgia, without the reality of old graphics.
Let’s quickly define dithering, just in case we’re unfamiliar. Dithering blends two or more colors by intermingling their pixels rather than transitioning smoothly. In a smooth transition, an image uses any number of colors to create a gradient. With dithering, the pixels of the two (or more) colors are mixed together, and that mixing is done through the application of noise patterns.
It is recommended to find or create one’s own dithering patterns. This can be done in a number of ways, but the most precise is to create them oneself, pixel by pixel. It’s a lot of work up front, but once you’ve established your library of dithering patterns, future workflows will be fast and smooth.
Color banding occurs when two colors have a hard transition, or no transition. A good example is a skyline. The sky may be light blue near the ground and a deep blue higher up. If one only has three shades of blue to work with, the sky will have three bands going from light to dark. Dithering is used to intermingle the pixels along the edges, softening the transitions. In older games with limited color palettes, there was typically some amount of noticeable banding. While dithering eases the banding, too much dithering can result in a speckled mess of pixels.
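As an illustration of one common approach, ordered dithering (the Bayer-matrix version here is my own example, not necessarily what Mark used), this Python sketch renders a vertical light-to-dark sky gradient using only two “colors”:

```python
# 4x4 Bayer matrix; adding 0.5 and dividing by 16 turns each entry
# into a threshold in the open range (0, 1).
BAYER_4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_column(height, light, dark):
    """Render a vertical light-to-dark gradient using only two colors."""
    rows = []
    for y in range(height):
        t = y / (height - 1)          # 0.0 at the top, 1.0 at the bottom
        row = []
        for x in range(8):
            threshold = (BAYER_4[y % 4][x % 4] + 0.5) / 16
            row.append(dark if t > threshold else light)
        rows.append(row)
    return rows

sky = dither_column(16, light='.', dark='#')
print('\n'.join(''.join(r) for r in sky))
```

The tiling thresholds are what turn a hard band edge into an intermingled checker-like zone; using a bigger matrix (8x8, 16x16) makes the transition smoother at the cost of a more visible pattern.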
Getting the best-quality graphics in old games came down to balancing color banding and dithering. Knowing that both would be noticeable, it wasn’t so much about eliminating them as finding the balance that looked most aesthetically appealing. This is where the eye of an artist was most needed.
Create reusable dithering gradient patterns that can be applied quickly. As an example, Photoshop allows selections to be saved as fill patterns. Create the low-index-color gradients you will use often, such as tree trunks (a gradient going from the lit to the unlit side). Save one as a pattern by selecting the low-color gradient and choosing “Edit > Define Pattern”, then use the pattern to fill areas that will become trees.
Understand Light in the Real World
Objects are not only lit by direct light sources; they are also lit by indirect sources. As an example, a tree is lit by the sun on one side; the opposite, shadow-casting side isn’t lit by the sun, but neither is it black. It is illuminated by other objects that bounce light back. In this scenario, that would most commonly be the sky. When shading portions of imagery that are not directly lit, sample color values from other elements that would bounce light back. The background, or sky, is typically a good source.
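A tiny Python sketch of the idea (the specific colors and blend amounts are made-up values for illustration): the shadow side isn’t black, it’s the base color pulled toward the sky’s bounce color.

```python
def lerp_color(a, b, t):
    """Blend color a toward color b by factor t (0..1)."""
    return tuple(round(ca + (cb - ca) * t) for ca, cb in zip(a, b))

trunk      = (92, 64, 40)     # base tree-bark color
sky_bounce = (120, 150, 210)  # color of the light the sky bounces back
sun        = (255, 240, 200)  # warm direct sunlight

# Shadow side: the base color pulled toward the sky color, not black.
shadow_side = lerp_color(trunk, sky_bounce, 0.35)
# Lit side: the base color pushed toward the sunlight color.
lit_side = lerp_color(trunk, sun, 0.35)
```

Sampling the actual background color for `sky_bounce` (rather than inventing one) is what ties the foreground object into the scene.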
To assist in understanding where the indirect light is coming from, it is best to create backgrounds first. In doing this, one will have a better understanding of which colors to sample when highlighting and blending foreground objects. This is the same concept used in any painting art form. A few years ago I took some oil painting lessons, which is where I was first introduced to the idea of blending foreground colors with the background. In traditional painting, subjects painted on top of the background take on color properties of the background. This all helps make an image look natural and appealing.
Limited (but not really) Color Palettes
At one time artists could only use 16 colors. Then they could only use 256. When creating retro art, enjoy the freedom of using however many colors are necessary to create something visually appealing; however, if creating something that is supposed to feel nostalgic, remember that a limited color palette, or at least the illusion of limitation, is a needed component. In Mark’s session he mentioned that he wasn’t exactly sure how many colors were being used in each of the new “Thimbleweed Park” images. One was around 60 and another was over 500.
My takeaway here is that a retro pixel artist can enjoy the freedom of using as many colors as necessary, with a mindset toward fewer. If one feels more colors are needed, that is fine, but perhaps one should think about using more pronounced color banding and dithering to create the feel that there are fewer.
Decide for yourself what the limitation should be, and use it as a guide rather than a hard-and-fast rule. Whether it’s modern or retro video game art, at the end of the day the goal is to create something that players will want to look at. Again, make a renaissance fair, not the actual Renaissance.
Software Used by Mark in This Session
Photoshop – Adobe’s tool is the current standard, but a few settings need attention. Photoshop’s default setup is for modern high-quality art. In order to create retro pixel art, keep in mind the following:
In Preferences, make sure image interpolation is set to “Nearest Neighbor”.
Turn off all options for anti-aliasing (depending on the tool in use, this is often a checkbox in the current tool’s settings).
Do your own anti-aliasing to soften jagged edges where appropriate.
Following these guidelines should prevent Photoshop from doing its own resizing and resampling of pixels (which looks nasty when you scale, skew, or enlarge).
Pro Motion by Cosmigo – Pro Motion is a dedicated pixel art tool. It contains many options geared toward animation, shading, and reuse/tiling of indexed and low-resolution pixel art. It is probably the best solution currently available, and the evolution of what EA’s Deluxe Paint would have become. Pro Motion, while not widely known in digital art, is used by some professional studios. Mark used it to create art for games such as Spyro: Eternal Night for the GBA. More recently it has been used to create art for games such as Shovel Knight.
I’m not personally skilled with it, so I’ll link to some tutorials I found. I purchased a license after this session, and I’ll be getting to know it in the near(ish) future.
What I gathered from the session largely boils down to this: leave out the old limitations and create your own. If one’s goal is to create something that feels retro, then use limited color palettes. Strategically place color banding and noticeable dithering to create a retro vibe, but don’t try to make an EGA game. Decide what you want, design limitations that will keep you focused on your visual goal, and hold loosely to those artificial limitations.
There was a lot to digest in this session. I’ve spent some time using the techniques Mark demonstrated, and I’m much happier with my results. I’ll post some of my new images as I finish them. Right now my pixel art is more of an experiment to study. The majority of my development time is going into finishing the game I’m currently working on; however, you can bet some of this experience will make its way into my final art. There was more information given in this session than I’ve covered here. I’m hoping it will be in the GDC archive soon; I’d like to revisit it a couple of times.
Thanks for reading, and as promised here is that handy list of resources.
(*Note: this demo is currently down while students make upgrades. It will be back soon. For functional demos, please visit hello.step2digital.com) Once installed on your iPhone or iPad, touch the address bar at the top of the Argon3 Browser, and go to jurassicAR.step2digital.com
Load times for your initial viewing of the AR page will be long. It will likely appear that your camera is freezing and/or updating slowly. There is about 50MB of data being downloaded (keep that in mind if you’re using a phone data plan). After the initial viewing the models will remain cached, and subsequent viewings should load faster.
With Argon 3 open and the jurassicAR page loaded, view any of the following images to see the AR dinosaurs.
You don’t have to view the images on your monitor. You can print the images and view the dinosaurs where and however you like!
Application Beta Note: As this is a browser-based experience utilizing beta software, and coded by entry-level students, it is possible that the application will not always work as described. Some inconsistencies may occur. It will likely be an ongoing project that students refine as their skills grow, and future students may port it to Unity3D.
Current AR-Enabled Images (printing them or opening them in separate tabs can yield better results)
What is Jurassic CART?
The IGD program at CART is unique in that the technical portion of the course is merged with students’ senior-year English requirement. This year the students read Jurassic Park by Michael Crichton. Jurassic CART is an Augmented Reality experiment created by the students, and is a companion technical project for the book. In addition to the AR project, a subset of students worked on an interactive 3D simulation of the park using the Unity3D game engine.
What is CART?
CART is a high school in California’s Central Valley. It is jointly owned by the Clovis and Fresno Unified School Districts. CART is not a charter school. CART is for any student who wishes to experience a project based approach to learning. Entrance to CART is based on a lottery system, and not biased to any one type of student. More information about CART and the IGD program can be found at the official CART website.
What Did the Students Do?
For this project students formed groups based on specialties and interests.
3D art pipeline: model; rig; envelope; animate; export in FBX for Unity3D; export in DAE for JSON conversion; use Photoshop/Illustrator as needed for supporting imagery.
Vuforia – A 3rd party solution for image target tracking in AR applications
HTML5/XML – formats for display and passing information
Autodesk Maya – Industry standard 3D Modelling/Animation software
What Did the Instructor Do?
With a project like this, it’s easy to wonder how much the teacher “helped”. For the most part, the instructor’s role was simply to guide the students through use of the technologies/software, and to assist in brainstorming logic solutions with the programmers.
With the following three exceptions, everything was assembled by the students with instructor-guided use of the technologies/software involved.
Three Exceptions (for those who don’t mind a little tech jargon)
WebSQL – A solution was needed to store more than 5MB of data locally, so that the 50+ MB of models would not need to be downloaded with every use of the browser. The instructor devised a way to store large JSON files as text entries in a local WebSQL database. The code checks the local database before downloading; if an entry exists, it uses that instead of eating up data transfer. This solution was presented as one of our 3rd-party technologies, and the students were instructed in how to implement and modify it for the app.
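The project’s actual code used WebSQL and JavaScript in the browser; as an illustrative stand-in, here’s the same check-the-local-database-before-downloading pattern sketched in Python with SQLite (the table layout, model names, and fake download function are all hypothetical):

```python
import json
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE models (name TEXT PRIMARY KEY, json TEXT)')

downloads = []  # track "network" hits for the demo

def fake_download(name):
    downloads.append(name)          # stand-in for the real HTTP fetch
    return json.dumps({'model': name, 'vertices': []})

def load_model(name):
    # Check the local database first; only download on a cache miss.
    row = db.execute('SELECT json FROM models WHERE name = ?',
                     (name,)).fetchone()
    if row:
        return json.loads(row[0])
    text = fake_download(name)
    db.execute('INSERT INTO models (name, json) VALUES (?, ?)', (name, text))
    return json.loads(text)

load_model('t-rex')   # first call hits the "network" and caches the JSON
load_model('t-rex')   # second call is served from the local database
```

The design win is that the cache key and the payload are both plain text, so the same pattern works anywhere a key-value store is available, which is exactly why storing JSON as text rows in WebSQL worked for the browser app.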
Extend ThreeJS JSON Load Functionality – The ThreeJS JSON loader only worked with a supplied path to a JSON file; however, our new WebSQL solution supplies the JSON code directly. A new function was needed to load the code directly, rather than from a file. This was demoed to the students, and they were instructed in how to use this function in the app.
3D Model Optimization – The 3D artists developed high-poly models, some of which looked amazing. They created lower-poly versions optimized for the game engine; however, it was found late in the project that even their low-poly models were too complex to be practical for Web-based AR. The instructor assisted in getting the 3D models to optimal detail levels for Web-based AR.
What Is Step 2 | Digital Design’s Involvement?
Step 2 | Digital Design is the name that CART IGD instructor Matthew Hodge uses for his work as a digital freelancer/game developer. Step 2 has sponsored this project by providing web space for the online components. Additionally, Step 2 has paid for the cost of the license required by Vuforia, in order to ensure that the image tracking portion of this project remains functional.
Additional Project Credits for CART Product Development & Robotics
For the showcase event on 1/12/2016, physical display components were created. As part of the Product Development and Robotics Courses at CART, students fabricated a model of the island, a mechanical sky cam to view AR on the physical model, and an autonomous vehicle created from Mr. Hodge’s RC Jurassic Park Jeep. Pictures will be posted here after the event.
As the latest 3D projects near their presentable stage, let’s take a look back at the recently found “Lost Portfolio of Hodge”. You know, some of it’s actually pretty good. It wouldn’t be surprising to see this stuff reborn (*cough, “Twitch”, *cough).