This page is written for version:
1.21.8
PREREQUISITES
Make sure you've read Rendering Concepts first. This page builds on those concepts and discusses how to render objects in the world.
This page explores some of the more modern rendering concepts. You'll learn more about the two phases rendering is split into: "extraction" (or "preparation") and "drawing" (or "rendering"). In this guide, we will refer to the "extraction/preparation" phase as the "extraction" phase and the "drawing/rendering" phase as the "drawing" phase.
To render custom objects in the world, you have two choices. You can inject into existing vanilla rendering and add your code there, but that limits you to the vanilla render pipelines. If those don't suit your needs, you need a custom render pipeline.
Before we get into custom render pipelines, let's look at vanilla rendering.
As mentioned in Rendering Concepts, recent Minecraft versions have been splitting rendering into two phases: "extraction" and "drawing".
All data needed for rendering is collected during the "extraction" phase. This includes, for example, writing to the buffered builder. Calling a render method such as VertexRendering.drawFilledBox writes vertices to the buffered builder and is therefore part of the "extraction" phase. Note that even though many methods are prefixed with draw or render, they should still be called during the "extraction" phase. You should add all elements you want to render during this phase.
When the "extraction" phase is done, the "drawing" phase starts, and the buffered builder is built. During this phase, the buffered builder is drawn to the screen. The ultimate goal of this "extraction" and "drawing" split is to allow for drawing the previous frame in parallel to extracting the next frame, improving performance.
Now, with these two phases in mind, let's look at how to create a custom render pipeline.
Let's say we want to render waypoints, which should appear through walls. The closest vanilla pipeline for that would be RenderPipelines#DEBUG_FILLED_BOX, but it doesn't render through walls, so we will need a custom render pipeline that disables the depth test.
We define a custom render pipeline in a class:
private static final RenderPipeline FILLED_THROUGH_WALLS = RenderPipelines.register(RenderPipeline.builder(RenderPipelines.POSITION_COLOR_SNIPPET)
.withLocation(Identifier.of(FabricDocsReference.MOD_ID, "pipeline/debug_filled_box_through_walls"))
.withVertexFormat(VertexFormats.POSITION_COLOR, VertexFormat.DrawMode.TRIANGLE_STRIP)
.withDepthTestFunction(DepthTestFunction.NO_DEPTH_TEST)
.build()
);
We first implement the "extraction" phase. The following method can be called during the "extraction" phase to add a waypoint to be rendered.
private static final BufferAllocator allocator = new BufferAllocator(RenderLayer.CUTOUT_BUFFER_SIZE);
private BufferBuilder buffer;
private void renderWaypoint(WorldRenderContext context) {
MatrixStack matrices = context.matrixStack();
Vec3d camera = context.camera().getPos();
assert matrices != null;
matrices.push();
matrices.translate(-camera.x, -camera.y, -camera.z);
if (buffer == null) {
buffer = new BufferBuilder(allocator, FILLED_THROUGH_WALLS.getVertexFormatMode(), FILLED_THROUGH_WALLS.getVertexFormat());
}
VertexRendering.drawFilledBox(matrices, buffer, 0f, 100f, 0f, 1f, 101f, 1f, 0f, 1f, 0f, 0.5f);
matrices.pop();
}
Note that the size used in the BufferAllocator constructor depends on the render pipeline you are using. In our case, it is RenderLayer.CUTOUT_BUFFER_SIZE.
If you want to render multiple waypoints, call this method multiple times. Make sure you do so during the "extraction" phase, BEFORE the "drawing" phase starts, at which point the buffer builder is built.
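For example, a variant of this method that loops over a list of waypoint positions could look like the sketch below; the waypoints field and the extractWaypoints name are illustrative and not part of the example class on this page.
// Hypothetical list of waypoint positions to render; illustrative only.
private final List<Vec3d> waypoints = new ArrayList<>();
private void extractWaypoints(WorldRenderContext context) {
	MatrixStack matrices = context.matrixStack();
	Vec3d camera = context.camera().getPos();
	assert matrices != null;
	matrices.push();
	matrices.translate(-camera.x, -camera.y, -camera.z);
	if (buffer == null) {
		buffer = new BufferBuilder(allocator, FILLED_THROUGH_WALLS.getVertexFormatMode(), FILLED_THROUGH_WALLS.getVertexFormat());
	}
	// Write one filled box per waypoint into the same BufferBuilder.
	for (Vec3d pos : waypoints) {
		VertexRendering.drawFilledBox(matrices, buffer, (float) pos.x, (float) pos.y, (float) pos.z, (float) (pos.x + 1), (float) (pos.y + 1), (float) (pos.z + 1), 0f, 1f, 0f, 0.5f);
	}
	matrices.pop();
}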
Note that in the above code we are saving the BufferBuilder in a field. This is because we need it in the "drawing" phase. In this case, the BufferBuilder is our "render state" or "extracted data". If you need additional data during the "drawing" phase, you should create a custom render state class to hold the BufferBuilder and any additional rendering data you need, as sketched below.
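Such a render state could be as simple as the sketch below; the record name and the extra fields are illustrative only.
// Hypothetical render state: the BufferBuilder plus any extra data the "drawing" phase needs.
// The name and the extra fields here are illustrative only.
private record WaypointRenderState(BufferBuilder buffer, Vector4f colorModulator, int waypointCount) {
}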
Now we'll implement the "drawing" phase. This should be called after all waypoints you want to render have been added to the BufferBuilder during the "extraction" phase.
private static final Vector4f COLOR_MODULATOR = new Vector4f(1f, 1f, 1f, 1f);
private MappableRingBuffer vertexBuffer;
private void drawFilledThroughWalls(MinecraftClient client, @SuppressWarnings("SameParameterValue") RenderPipeline pipeline) {
// Build the buffer
BuiltBuffer builtBuffer = buffer.end();
BuiltBuffer.DrawParameters drawParameters = builtBuffer.getDrawParameters();
VertexFormat format = drawParameters.format();
GpuBuffer vertices = upload(drawParameters, format, builtBuffer);
draw(client, pipeline, builtBuffer, drawParameters, vertices, format);
// Rotate the vertex buffer so we are less likely to use buffers that the GPU is using
vertexBuffer.rotate();
buffer = null;
}
private GpuBuffer upload(BuiltBuffer.DrawParameters drawParameters, VertexFormat format, BuiltBuffer builtBuffer) {
// Calculate the size needed for the vertex buffer
int vertexBufferSize = drawParameters.vertexCount() * format.getVertexSize();
// Initialize or resize the vertex buffer as needed
if (vertexBuffer == null || vertexBuffer.size() < vertexBufferSize) {
vertexBuffer = new MappableRingBuffer(() -> FabricDocsReference.MOD_ID + " example render pipeline", GpuBuffer.USAGE_VERTEX | GpuBuffer.USAGE_MAP_WRITE, vertexBufferSize);
}
// Copy vertex data into the vertex buffer
CommandEncoder commandEncoder = RenderSystem.getDevice().createCommandEncoder();
try (GpuBuffer.MappedView mappedView = commandEncoder.mapBuffer(vertexBuffer.getBlocking().slice(0, builtBuffer.getBuffer().remaining()), false, true)) {
MemoryUtil.memCopy(builtBuffer.getBuffer(), mappedView.data());
}
return vertexBuffer.getBlocking();
}
private static void draw(MinecraftClient client, RenderPipeline pipeline, BuiltBuffer builtBuffer, BuiltBuffer.DrawParameters drawParameters, GpuBuffer vertices, VertexFormat format) {
GpuBuffer indices;
VertexFormat.IndexType indexType;
if (pipeline.getVertexFormatMode() == VertexFormat.DrawMode.QUADS) {
// Sort the quads if there is translucency
builtBuffer.sortQuads(allocator, RenderSystem.getProjectionType().getVertexSorter());
// Upload the index buffer
indices = pipeline.getVertexFormat().uploadImmediateIndexBuffer(builtBuffer.getSortedBuffer());
indexType = builtBuffer.getDrawParameters().indexType();
} else {
// Use the general shape index buffer for non-quad draw modes
RenderSystem.ShapeIndexBuffer shapeIndexBuffer = RenderSystem.getSequentialBuffer(pipeline.getVertexFormatMode());
indices = shapeIndexBuffer.getIndexBuffer(drawParameters.indexCount());
indexType = shapeIndexBuffer.getIndexType();
}
// Actually execute the draw
GpuBufferSlice dynamicTransforms = RenderSystem.getDynamicUniforms()
.write(RenderSystem.getModelViewMatrix(), COLOR_MODULATOR, RenderSystem.getModelOffset(), RenderSystem.getTextureMatrix(), 1f);
try (RenderPass renderPass = RenderSystem.getDevice()
.createCommandEncoder()
.createRenderPass(() -> FabricDocsReference.MOD_ID + " example render pipeline rendering", client.getFramebuffer().getColorAttachmentView(), OptionalInt.empty(), client.getFramebuffer().getDepthAttachmentView(), OptionalDouble.empty())) {
renderPass.setPipeline(pipeline);
RenderSystem.bindDefaultUniforms(renderPass);
renderPass.setUniform("DynamicTransforms", dynamicTransforms);
// Bind texture if applicable:
// Sampler0 is used for texture inputs in vertices
// renderPass.bindSampler("Sampler0", textureView);
renderPass.setVertexBuffer(0, vertices);
renderPass.setIndexBuffer(indices, indexType);
// The base vertex is the starting index when we copied the data into the vertex buffer divided by vertex size
//noinspection ConstantValue
renderPass.drawIndexed(0 / format.getVertexSize(), 0, drawParameters.indexCount(), 1);
}
builtBuffer.close();
}
Finally, we need to clean up resources when the game renderer is closed. GameRenderer#close should call this method, and for that you currently need to inject into GameRenderer#close with a mixin.
public void close() {
allocator.close();
if (vertexBuffer != null) {
vertexBuffer.close();
vertexBuffer = null;
}
}
package com.example.docs.mixin.client;
import org.spongepowered.asm.mixin.Mixin;
import org.spongepowered.asm.mixin.injection.At;
import org.spongepowered.asm.mixin.injection.Inject;
import org.spongepowered.asm.mixin.injection.callback.CallbackInfo;
import net.minecraft.client.render.GameRenderer;
import com.example.docs.rendering.CustomRenderPipeline;
@Mixin(GameRenderer.class)
public class GameRendererMixin {
@Inject(method = "close", at = @At("RETURN"))
private void onGameRendererClose(CallbackInfo ci) {
CustomRenderPipeline.getInstance().close();
}
}
Combining all the steps from above, we get a simple class that renders a waypoint at (0, 100, 0) through walls.
package com.example.docs.rendering;
import java.util.OptionalDouble;
import java.util.OptionalInt;
import com.mojang.blaze3d.buffers.GpuBuffer;
import com.mojang.blaze3d.buffers.GpuBufferSlice;
import com.mojang.blaze3d.pipeline.RenderPipeline;
import com.mojang.blaze3d.platform.DepthTestFunction;
import com.mojang.blaze3d.systems.CommandEncoder;
import com.mojang.blaze3d.systems.RenderPass;
import com.mojang.blaze3d.systems.RenderSystem;
import com.mojang.blaze3d.vertex.VertexFormat;
import org.joml.Vector4f;
import org.lwjgl.system.MemoryUtil;
import net.minecraft.client.MinecraftClient;
import net.minecraft.client.gl.MappableRingBuffer;
import net.minecraft.client.gl.RenderPipelines;
import net.minecraft.client.render.BufferBuilder;
import net.minecraft.client.render.BuiltBuffer;
import net.minecraft.client.render.RenderLayer;
import net.minecraft.client.render.VertexFormats;
import net.minecraft.client.render.VertexRendering;
import net.minecraft.client.util.BufferAllocator;
import net.minecraft.client.util.math.MatrixStack;
import net.minecraft.util.Identifier;
import net.minecraft.util.math.Vec3d;
import net.fabricmc.api.ClientModInitializer;
import net.fabricmc.fabric.api.client.rendering.v1.WorldRenderContext;
import net.fabricmc.fabric.api.client.rendering.v1.WorldRenderEvents;
import com.example.docs.FabricDocsReference;
public class CustomRenderPipeline implements ClientModInitializer {
private static CustomRenderPipeline instance;
// :::custom-pipelines:define-pipeline
private static final RenderPipeline FILLED_THROUGH_WALLS = RenderPipelines.register(RenderPipeline.builder(RenderPipelines.POSITION_COLOR_SNIPPET)
.withLocation(Identifier.of(FabricDocsReference.MOD_ID, "pipeline/debug_filled_box_through_walls"))
.withVertexFormat(VertexFormats.POSITION_COLOR, VertexFormat.DrawMode.TRIANGLE_STRIP)
.withDepthTestFunction(DepthTestFunction.NO_DEPTH_TEST)
.build()
);
// :::custom-pipelines:define-pipeline
// :::custom-pipelines:extraction-phase
private static final BufferAllocator allocator = new BufferAllocator(RenderLayer.CUTOUT_BUFFER_SIZE);
private BufferBuilder buffer;
// :::custom-pipelines:extraction-phase
// :::custom-pipelines:drawing-phase
private static final Vector4f COLOR_MODULATOR = new Vector4f(1f, 1f, 1f, 1f);
private MappableRingBuffer vertexBuffer;
// :::custom-pipelines:drawing-phase
public static CustomRenderPipeline getInstance() {
return instance;
}
@Override
public void onInitializeClient() {
instance = this;
WorldRenderEvents.AFTER_TRANSLUCENT.register(this::extractAndDrawWaypoint);
}
private void extractAndDrawWaypoint(WorldRenderContext context) {
renderWaypoint(context);
drawFilledThroughWalls(MinecraftClient.getInstance(), FILLED_THROUGH_WALLS);
}
// :::custom-pipelines:extraction-phase
private void renderWaypoint(WorldRenderContext context) {
MatrixStack matrices = context.matrixStack();
Vec3d camera = context.camera().getPos();
assert matrices != null;
matrices.push();
matrices.translate(-camera.x, -camera.y, -camera.z);
if (buffer == null) {
buffer = new BufferBuilder(allocator, FILLED_THROUGH_WALLS.getVertexFormatMode(), FILLED_THROUGH_WALLS.getVertexFormat());
}
VertexRendering.drawFilledBox(matrices, buffer, 0f, 100f, 0f, 1f, 101f, 1f, 0f, 1f, 0f, 0.5f);
matrices.pop();
}
// :::custom-pipelines:extraction-phase
// :::custom-pipelines:drawing-phase
private void drawFilledThroughWalls(MinecraftClient client, @SuppressWarnings("SameParameterValue") RenderPipeline pipeline) {
// Build the buffer
BuiltBuffer builtBuffer = buffer.end();
BuiltBuffer.DrawParameters drawParameters = builtBuffer.getDrawParameters();
VertexFormat format = drawParameters.format();
GpuBuffer vertices = upload(drawParameters, format, builtBuffer);
draw(client, pipeline, builtBuffer, drawParameters, vertices, format);
// Rotate the vertex buffer so we are less likely to use buffers that the GPU is using
vertexBuffer.rotate();
buffer = null;
}
private GpuBuffer upload(BuiltBuffer.DrawParameters drawParameters, VertexFormat format, BuiltBuffer builtBuffer) {
// Calculate the size needed for the vertex buffer
int vertexBufferSize = drawParameters.vertexCount() * format.getVertexSize();
// Initialize or resize the vertex buffer as needed
if (vertexBuffer == null || vertexBuffer.size() < vertexBufferSize) {
vertexBuffer = new MappableRingBuffer(() -> FabricDocsReference.MOD_ID + " example render pipeline", GpuBuffer.USAGE_VERTEX | GpuBuffer.USAGE_MAP_WRITE, vertexBufferSize);
}
// Copy vertex data into the vertex buffer
CommandEncoder commandEncoder = RenderSystem.getDevice().createCommandEncoder();
try (GpuBuffer.MappedView mappedView = commandEncoder.mapBuffer(vertexBuffer.getBlocking().slice(0, builtBuffer.getBuffer().remaining()), false, true)) {
MemoryUtil.memCopy(builtBuffer.getBuffer(), mappedView.data());
}
return vertexBuffer.getBlocking();
}
private static void draw(MinecraftClient client, RenderPipeline pipeline, BuiltBuffer builtBuffer, BuiltBuffer.DrawParameters drawParameters, GpuBuffer vertices, VertexFormat format) {
GpuBuffer indices;
VertexFormat.IndexType indexType;
if (pipeline.getVertexFormatMode() == VertexFormat.DrawMode.QUADS) {
// Sort the quads if there is translucency
builtBuffer.sortQuads(allocator, RenderSystem.getProjectionType().getVertexSorter());
// Upload the index buffer
indices = pipeline.getVertexFormat().uploadImmediateIndexBuffer(builtBuffer.getSortedBuffer());
indexType = builtBuffer.getDrawParameters().indexType();
} else {
// Use the general shape index buffer for non-quad draw modes
RenderSystem.ShapeIndexBuffer shapeIndexBuffer = RenderSystem.getSequentialBuffer(pipeline.getVertexFormatMode());
indices = shapeIndexBuffer.getIndexBuffer(drawParameters.indexCount());
indexType = shapeIndexBuffer.getIndexType();
}
// Actually execute the draw
GpuBufferSlice dynamicTransforms = RenderSystem.getDynamicUniforms()
.write(RenderSystem.getModelViewMatrix(), COLOR_MODULATOR, RenderSystem.getModelOffset(), RenderSystem.getTextureMatrix(), 1f);
try (RenderPass renderPass = RenderSystem.getDevice()
.createCommandEncoder()
.createRenderPass(() -> FabricDocsReference.MOD_ID + " example render pipeline rendering", client.getFramebuffer().getColorAttachmentView(), OptionalInt.empty(), client.getFramebuffer().getDepthAttachmentView(), OptionalDouble.empty())) {
renderPass.setPipeline(pipeline);
RenderSystem.bindDefaultUniforms(renderPass);
renderPass.setUniform("DynamicTransforms", dynamicTransforms);
// Bind texture if applicable:
// Sampler0 is used for texture inputs in vertices
// renderPass.bindSampler("Sampler0", textureView);
renderPass.setVertexBuffer(0, vertices);
renderPass.setIndexBuffer(indices, indexType);
// The base vertex is the starting index when we copied the data into the vertex buffer divided by vertex size
//noinspection ConstantValue
renderPass.drawIndexed(0 / format.getVertexSize(), 0, drawParameters.indexCount(), 1);
}
builtBuffer.close();
}
// :::custom-pipelines:drawing-phase
// :::custom-pipelines:clean-up
public void close() {
allocator.close();
if (vertexBuffer != null) {
vertexBuffer.close();
vertexBuffer = null;
}
}
// :::custom-pipelines:clean-up
}
Don't forget the GameRendererMixin as well! Here is the result: