About Pixotope operation
Start Pixotope from the icon in the center of the screen
Select a project on the startup screen (START). If the project has not yet been added to Pixotope, or if you are creating a new one, select “Create New Show” in the upper right corner of the screen.
A new project can be generated from a template. If an existing project already exists, specify the path to its .uproject file. A project created in UE4 can also be used; when it is loaded into Pixotope, it is made compatible automatically.
After selecting a project, move to the SETUP screen. The levels contained in the project are listed here, so select the one you want to start. There are two modes: “Editor Mode” and “LIVE Mode.” Editor Mode is mainly used for editing and testing. For actual shooting, start in LIVE Mode, which has a lighter processing load.
Since the project itself has the same structure as a normal UE4 project, files can be updated and their differences managed in the same way as in ordinary UE4 work. However, levels and Blueprints are stored as binary (hashed) assets, so partial, diff-based updates of those files are not possible. If multiple people edit the same file, a conflict will occur, and it can only be resolved by having one side overwrite the other.
Copying the project folder as it is gives you a full backup. There is no major drawback, but the internal assets themselves are duplicated, so watch the total file size.
The sensor in the FX6 is a so-called “35mm full-frame” sensor measuring 36 mm x 24 mm, an aspect ratio of 3:2.
Open Camera Tracking in the Configure section of SETUP and open [Camera and Lens]. The sensor size is set in the [Filmback] section inside it.
Be careful with the width and height settings. The values entered here are not the physical sensor size but the sensor area actually used, matching the aspect ratio of the image (additional verification required). For example, when shooting 16:9 video, the sensor area used is 36 mm x 20.25 mm, so enter those values as width and height. Since the captured image is presumably resampled to the output resolution, what matters here is the aspect ratio rather than the absolute numbers.
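As a sanity check for those Filmback numbers, the used sensor area can be computed from the full sensor size and the output aspect ratio. The sketch below is purely illustrative; the function is not part of Pixotope.

```python
# Minimal sketch: compute the sensor area actually used (Filmback width/height)
# when a 3:2 full-frame sensor shoots an image with a different aspect ratio.
def filmback_for_aspect(sensor_w_mm, sensor_h_mm, aspect_w, aspect_h):
    target = aspect_w / aspect_h
    used_h = sensor_w_mm / target              # assume full width is used, crop the height
    if used_h <= sensor_h_mm:
        return sensor_w_mm, used_h
    return sensor_h_mm * target, sensor_h_mm   # otherwise crop the width instead

print(filmback_for_aspect(36.0, 24.0, 16, 9))  # -> (36.0, 20.25)
```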
Basically, 1080p (1920 x 1080) is used as the base. If the SDI output supports it, 2160p (3840 x 2160) can also be used.
The camera is also set to match the frame rate of the composite image, normally 30 fps or 60 fps. Note, however, that the actual frame rate on the camera side differs from the displayed value: “30 fps” is actually 29.97 fps, and “60 fps” is actually 59.94 fps.
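This is because NTSC-derived rates are scaled by 1000/1001. A quick check of the exact values (illustrative only):

```python
from fractions import Fraction

# NTSC-family rates are fractions with denominator 1001, which is why the
# camera's "30 fps" and "60 fps" are really 29.97 and 59.94 fps.
for label, rate in {"30 fps": Fraction(30000, 1001),
                    "60 fps": Fraction(60000, 1001)}.items():
    print(f"{label} -> {float(rate):.5f} fps")
# 30 fps -> 29.97003 fps
# 60 fps -> 59.94006 fps
```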
The input/output settings in Pixotope are made under Video I/O on the SETUP screen.
The video frames from RedSpy and from the physical camera are updated at different timings. By setting a delay for each in Pixotope, the update timing can be aligned and the consistency of the composite (final output) image maintained.
However, these delays are not constant; the update timing drifts dynamically under processing load. In particular, when a temporary spike in load on Pixotope, such as while loading levels or assets, causes a freeze, the offset between the live-action and virtual images changes at that moment. It is therefore necessary to keep the rendering load as constant as possible, and to build and configure the scene so that Pixotope’s rendering FPS never drops below the FPS of the live action and RedSpy.
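The delay values themselves are set in the Pixotope UI; the arithmetic behind them is simply converting a latency difference into frames. A minimal sketch with placeholder numbers (not measured values):

```python
# Convert a latency difference between two sources into frames of delay,
# so the earlier source can be held back to line up with the slower one.
def delay_in_frames(source_latency_ms, slowest_latency_ms, fps):
    frame_ms = 1000.0 / fps
    return round((slowest_latency_ms - source_latency_ms) / frame_ms)

camera_latency_ms = 100.0   # hypothetical camera/capture latency
tracking_latency_ms = 33.0  # hypothetical RedSpy tracking latency
print(delay_in_frames(tracking_latency_ms, camera_latency_ms, fps=60.0))  # -> 4 frames
```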
The FX6 generates timecode (TC), and this TC can be passed to Pixotope together with the image via the video input of the capture board. By synchronizing on TC, misalignment between the camera and CG images can largely be suppressed. On the camera side, two TC modes can be selected: DF (Drop Frame) and NDF (Non-Drop Frame). Normally, NDF is used.
With DF, the frames appeared to be completely synchronized at the moment TC was reset, but the offset was seen to grow in proportion to elapsed time. This is presumably because the camera side runs at 29.97 fps or 59.94 fps while the Pixotope side runs at 30.0 fps or 60.0 fps, and because drop-frame TC deliberately skips frame numbers on the camera side, the gap between camera and Pixotope frames keeps widening. With NDF, the offset did not appear to change regardless of elapsed time.
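To get a feel for the magnitude, the raw rate mismatch alone accumulates quickly; a back-of-the-envelope estimate (not a description of Pixotope’s internal TC handling):

```python
# Drift between a 29.97 fps camera and a 30.0 fps renderer if nothing resyncs them.
camera_fps = 30000 / 1001   # 29.97002997...
render_fps = 30.0
for minutes in (1, 10, 60):
    drift = (render_fps - camera_fps) * minutes * 60
    print(f"{minutes:>2} min: ~{drift:.1f} frames of drift")
# 1 min: ~1.8,  10 min: ~18.0,  60 min: ~107.9 frames
```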
Furthermore, the slight misalignment that remains even with NDF is presumed to be due to genlock not being applied. With TC + genlock, a perfect match should be possible. Achieving this requires a physical camera with both terminals; since the FX6 has no genlock terminal, an FX9 would be needed.
Fine frame misalignment can be adjusted within Pixotope. With this, it is possible to shoot in an almost fully consistent state.
When genlock is applied using a sync generator, frame synchronization is performed more accurately. RedSpy has a genlock input, but a physical camera may or may not have one, depending on the model; the Sony FX6 has no genlock input.
For Pixotope, the signal is fed into the reference terminal of the capture board. If the corresponding settings are made on the Dashboard, the frame update timing can then be synchronized.
When a shader that includes transparency comes in front of the mask, the live-action image seen through the mask may not be drawn.
Live mode has a lighter processing load than Editor mode, so artifacts such as dropped frames are less likely to occur. Use Editor mode when making adjustments while checking the image, and Live mode for the actual shoot.
To keep the load on the PC running Pixotope as low as possible, video capture and distribution are handled by separate devices. For capture, the image is sent to a monitor with a built-in recorder such as the Blackmagic Video Assist, to another PC running OBS, or similar.
The recording load varies with the codec, so use one that the capture device can handle. On the other hand, the codec also affects image quality, so if a particular codec is required, use a capture device with sufficient performance.
Color correction in Pixotope is affected by the color characteristics of the display, so perform the setup on a color-managed display. The color format is described later.
Basically, Rec. 709, which general-purpose displays can reproduce, is set as the color space of the color-managed display, matching the devices on which viewers will see the deliverables. On the other hand, when the deliverable will be projected in a viewing space such as a movie theater and the characteristics of the display to be used are known, color correction can also be performed with an HDR setting (Rec. 2020, etc.). However, changing the format has a significant impact, so be sure to confirm it before starting color correction.
A thorough explanation of HDR that you can understand very well! Differences in gamma curves | EIZO CORPORATION
The details of the internal processing are unknown, but it is a function that automatically sets the keying baseline from the camera image.
Setting Up A Chroma Key Material in UE4
The adjustment parameters are basically the same as those in general chroma key compositing.
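As a reference for what such parameters typically do, here is a generic chroma-key sketch (key color, tolerance, softness). It illustrates the general technique only and is not Pixotope’s actual keyer.

```python
import numpy as np

def chroma_key_alpha(image_rgb, key_rgb, tolerance=0.15, softness=0.10):
    """Alpha matte: 0 where a pixel matches the key color, 1 where it is
    clearly foreground, with a soft ramp controlled by `softness`."""
    diff = np.linalg.norm(image_rgb - np.asarray(key_rgb, dtype=float), axis=-1)
    alpha = (diff - tolerance) / max(softness, 1e-6)
    return np.clip(alpha, 0.0, 1.0)

# Hypothetical 2x2 float-RGB frame against a pure green key.
frame = np.array([[[0.0, 1.0, 0.0], [0.1, 0.9, 0.1]],
                  [[0.8, 0.2, 0.2], [1.0, 1.0, 1.0]]])
print(chroma_key_alpha(frame, key_rgb=(0.0, 1.0, 0.0)))
```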
[Video related]
TC: Timecode
Genlock
Sync generator
color format
Color Correction/Color Grading
post-process
SDI
color monitor
cinema camera
FPS: Frames Per Second
sensor size
[Engineering/VFX related]
In-camera VFX
TA: Technical Artist
LED wall
Inner Frustum / Outer Frustum
Pixotope
UE4: Unreal Engine 4
Unity
Virtual production
RedSpy
keying
Pre-rendering/Realtime Rendering
TouchDesigner
HTC VIVE Tracker
Volumetric capture
reality
Photogrammetry
Houdini
[Unreal Engine related]
Pixotope
Megascans
Live Link
Niagara
blueprint
nDisplay
[Lighting-related]
DMX
Art-Net
sACN
moving lights