Originally posted on 28 November 2014
Source: Siren 2 Maniacs, page 158

Siren 2: Methods of portraying the other world that appears in ours, and the people who become caught up in it

01 - A view of the wireframe for Yamijima Mine's "Hell Steps" area, often used as a key visual for Siren 2. Production efficiency is improved by selecting only what is needed from these objects and displaying them in layers.
02 & 03 - The process of applying motions to characters. Ikuko and Yuri have slight differences in their running movements.
04 - It is possible to view the videos taken from different camera angles in real time.

The head of each actor serving as the model for a character was held in place, and their various expressions were photographed with a digital video camera. 22 basic expressions, four standby and insert cuts, and further cuts for the cutscenes were used as textures. In general, expressions were produced by combining textures pasted onto the face of the 3D model with movement of the bones around the model's jaw. Note also the blending of multiple textures.

As has already been noted, Siren 2 uses special methods to create its characters, but various other techniques, such as motion capture and facial animation, were also brought to bear on the game's production. Take the huge concrete labyrinth of Yamijima, for example: a single map comprises roughly 3,000 objects, so a layer function was used to manage each level. The objects that make up the stage are grouped by location into more than 130 layers, set up so that only the parts required at any moment are displayed.

As for the characters' movements, created with motion capture, the same motions were assigned to multiple characters of varying build (photo 2, above), and layers were used to differentiate the characters on top of their shared basic movements (photo 3). This made it possible to efficiently recreate the distinctive, smooth movement of human beings while also producing a wide range of variations.

Furthermore, deciding from which angle each scene should be shown to make it most dramatic is normally something that relies heavily on artistic instinct, but Siren 2's "camera switcher" feature let the team view footage from all of the different camera angles in real time and edit as they went (photo 4). This workflow is possible because everything, not only movements but also facial expressions, can be handled in real time.
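As a rough illustration only, the layer function described above might work along these lines: every stage object is assigned to one of the named layers, and the draw pass skips anything whose layer is not enabled for the player's current location. The structures `StageObject` and `Layer`, and the layer names below, are invented for this sketch and do not come from the book.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch: stage objects grouped into named layers,
// with only the layers needed for the current area displayed.
struct StageObject {
    std::string name;
    int layerId;   // which of the 130+ layers this object belongs to
};

struct Layer {
    std::string name;
    bool visible;
};

int main() {
    std::vector<Layer> layers = {
        {"mine_entrance", false}, {"hell_steps_upper", false}, {"hell_steps_lower", false}
    };
    std::vector<StageObject> objects = {
        {"gate", 0}, {"stair_block_A", 1}, {"stair_block_B", 1}, {"pit_floor", 2}
    };

    // Enable only the layer required for the player's current location.
    layers[1].visible = true;  // e.g. the player is on the upper Hell Steps

    // Draw pass: skip everything whose layer is hidden.
    for (const auto& obj : objects) {
        if (layers[obj.layerId].visible) {
            std::cout << "draw " << obj.name << "\n";
        }
    }
}
```

Grouping by layer rather than toggling objects one by one keeps the per-location bookkeeping manageable when a single map holds thousands of objects.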

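The motion-sharing idea can be sketched in a similar spirit. The scaling-and-offset scheme below is purely an assumption made for illustration, not the studio's actual pipeline: one captured clip is shared by every character, and a small per-character layer is applied on top so that, for example, Ikuko and Yuri run slightly differently.

```cpp
#include <cmath>
#include <iostream>

// Hypothetical sketch: one captured running motion shared by several
// characters, with a per-character layer applied on top to give each
// character slightly different movement.
struct Pose { double hipHeight; double strideAngle; };

Pose sampleBaseRun(double t) {
    // Shared motion-capture clip, identical for every character.
    return { 0.9 + 0.02 * std::sin(t * 6.0), 30.0 * std::sin(t * 6.0) };
}

Pose applyCharacterLayer(Pose base, double heightScale, double strideOffset) {
    // Character-specific layer: rescale for build, nudge the stride.
    return { base.hipHeight * heightScale, base.strideAngle + strideOffset };
}

int main() {
    for (double t = 0.0; t < 0.5; t += 0.25) {
        Pose base  = sampleBaseRun(t);
        Pose ikuko = applyCharacterLayer(base, 1.00,  0.0);
        Pose yuri  = applyCharacterLayer(base, 0.96, -2.0);
        std::cout << "t=" << t
                  << "  Ikuko stride=" << ikuko.strideAngle
                  << "  Yuri stride=" << yuri.strideAngle << "\n";
    }
}
```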
The facial expressions, which directly convey the characters' emotions, are created using textures: 22 cuts for "basic expressions" such as surprise and laughter, plus standby and insert cuts (2 each), for a total of 26 + α. These textures are always blended together, which captures the natural irregularity of facial expressions. Combining the textures pasted onto the face of the 3D model with the movement of the jaw bones creates a wide variety of expressions.
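A minimal sketch of blending multiple expression textures, assuming a simple linear blend between two of the 26 cuts; the weights and the two-texture blend are assumptions made for this example, not the game's actual shading.

```cpp
#include <iostream>

// Hypothetical sketch: blend two expression textures per texel so the face
// can sit partway between expressions instead of snapping between them.
struct Pixel { float r, g, b; };

Pixel blend(const Pixel& a, const Pixel& b, float w) {
    // Linear blend: w = 0 gives texture a, w = 1 gives texture b.
    return { a.r + (b.r - a.r) * w,
             a.g + (b.g - a.g) * w,
             a.b + (b.b - a.b) * w };
}

int main() {
    Pixel neutral  {0.80f, 0.70f, 0.65f};   // one texel of the "neutral" cut
    Pixel surprise {0.85f, 0.72f, 0.66f};   // same texel of the "surprise" cut

    // 40% of the way toward surprise; the jaw bones would move in tandem.
    Pixel result = blend(neutral, surprise, 0.4f);
    std::cout << result.r << " " << result.g << " " << result.b << "\n";
}
```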