In the first article, we explored how Vsync “wakes up” the framework and where the animation path begins. We reached the point where rendering instructions are prepared but not yet drawn.

Now we move on to the most resource-intensive stage — turning widgets into a real image on the screen. This is a path through three key phases: Layout, Paint, and Compositing.

PipelineOwner

If BuildOwner is responsible for construction logic, then PipelineOwner is responsible for visual representation. This is the object that manages the RenderObject tree. At this level, Flutter solves two main tasks: “where is the object?” (Layout) and “what does it look like?” (Paint).

The process starts with the RendererBinding.drawFrame method. Unlike WidgetsBinding, which deals with widget building, RendererBinding works directly with geometry.
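The entry point into this machinery from user code is usually markNeedsLayout(). A hypothetical custom render object (RenderBadge and its count field are invented for illustration) schedules itself roughly like this:

```dart
class RenderBadge extends RenderBox {
  int _count = 0;

  set count(int value) {
    if (value == _count) {
      return;
    }
    _count = value;
    // Adds this node to the PipelineOwner's dirty list and requests a new
    // frame; the actual geometry work happens later, in flushLayout().
    markNeedsLayout();
  }

  @override
  void performLayout() {
    size = constraints.smallest;
  }
}
```

Nothing is measured or drawn at the moment of the call — the setter only marks the node dirty and lets the pipeline batch the work into the next frame.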

| PipelineOwner Stage | Method | Result |
| --- | --- | --- |
| Layout | flushLayout() | Determining sizes and positions (Constraints -> Geometry) |
| Compositing Bits | flushCompositingBits() | Updating layer information |
| Paint | flushPaint() | Recording drawing commands into a Picture |
| Semantics | flushSemantics() | Preparing accessibility data |

Diving into Layout

In the flushLayout() method, Flutter traverses all RenderObjects marked as needsLayout. Flutter uses a single-pass algorithm: the parent passes constraints down, and the child returns its size upward.

void flushLayout() {
  if (!kReleaseMode) {
    FlutterTimeline.startSync('LAYOUT');
  }
  try {
    while (_nodesNeedingLayout.isNotEmpty) {
      final List<RenderObject> dirtyNodes = _nodesNeedingLayout;
      _nodesNeedingLayout = <RenderObject>[];
      // Parents go first: sorting by depth guarantees a node is laid out
      // before its descendants.
      for (final RenderObject node in dirtyNodes..sort((RenderObject a, RenderObject b) => a.depth - b.depth)) {
        if (node._needsLayout && node.owner == this) {
          node._layoutWithoutResize();
        }
      }
    }
  } finally {
    if (!kReleaseMode) {
      FlutterTimeline.finishSync();
    }
  }
}

Source: flutter/lib/src/rendering/object.dart
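The single-pass contract is visible in any performLayout() implementation: constraints arrive from the parent, the child is laid out with (possibly tightened) constraints, and the resulting size flows back up. A minimal sketch with one child (RenderPadded is an invented name, not a real Flutter class):

```dart
class RenderPadded extends RenderBox
    with RenderObjectWithChildMixin<RenderBox> {
  @override
  void performLayout() {
    // Constraints come down from the parent...
    final BoxConstraints inner = constraints.deflate(const EdgeInsets.all(8));
    child!.layout(inner, parentUsesSize: true);
    // ...and the size goes back up, padded and clamped to the constraints.
    size = constraints.constrain(child!.size + const Offset(16, 16));
  }
}
```

Because each node is visited exactly once per pass, layout cost stays linear in the number of dirty nodes — one of the reasons Flutter's layout is fast enough to run every animation frame.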

Compositing Bits: preparing the layer map

In the pipeline stages table, the second row is flushCompositingBits().

At this stage, Flutter traverses the tree and updates needsCompositing flags. The system must know in advance which objects require their own layer (for example, due to opacity or clipping via ClipRect) and which can be drawn on a shared canvas.
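Conceptually, the rule evaluated per node looks like the following (a simplified sketch, not the real implementation, which also accounts for repaint boundaries):

```dart
// A node needs its own layer if it inherently requires one
// (alwaysNeedsCompositing — e.g. a platform view or animated opacity),
// or if any of its children does.
bool computeNeedsCompositing(RenderObject node) {
  bool needs = node.alwaysNeedsCompositing;
  node.visitChildren((RenderObject child) {
    if (computeNeedsCompositing(child)) {
      needs = true;
    }
  });
  return needs;
}
```

Running this pass before Paint means the paint phase never has to backtrack: by the time paint() is called, every node already knows whether it draws on a shared canvas or into its own layer.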

Paint: recording intent, not pixels

Once geometry is defined, the flushPaint() phase begins. One of the biggest misconceptions lies here: many think Flutter draws pixels at this stage. In reality, Flutter records commands.

Each RenderObject receives a PaintingContext, which contains a Canvas. When the paint() method is called, commands (e.g., drawRect, drawCircle) are recorded into a PictureRecorder.

void paint(PaintingContext context, Offset offset) {
  // Real RenderObject uses context.canvas to record commands
  final Canvas canvas = context.canvas;
  canvas.drawRect(offset & size, Paint()..color = Colors.blue);
}

Source: flutter/lib/src/rendering/object.dart

The result of this phase is a ui.Picture object — an immutable list of graphical commands. This list is extremely cheap to create, allowing Flutter to generate it for every animation frame without significant overhead.
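The same recording API is available directly from dart:ui, which makes the "commands, not pixels" point easy to demonstrate (this snippet requires the Flutter engine, since dart:ui is not available in standalone Dart):

```dart
import 'dart:ui';

Picture recordCircle() {
  final PictureRecorder recorder = PictureRecorder();
  final Canvas canvas = Canvas(recorder);
  // This only appends a drawCircle command to the recording;
  // no pixel is touched yet.
  canvas.drawCircle(const Offset(50, 50), 40, Paint()..color = const Color(0xFF2196F3));
  // endRecording() finalizes the immutable list of commands.
  return recorder.endRecording();
}
```

Rasterizing the returned Picture is a separate, later step — exactly the split between the UI thread and the Raster Thread described below.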

Role of layers (Layer Tree)

If the animation is complex, Flutter does not draw everything on a single canvas. It builds a Layer Tree.

Using OpacityLayer, TransformLayer, or ClipRectLayer allows the engine to manipulate entire chunks of the image without repainting them. When you change opacity via FadeTransition, Flutter does not call paint() on all children; it simply updates the alpha property of the corresponding OpacityLayer.
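In widget terms, this is why FadeTransition is cheaper than rebuilding an Opacity widget with setState on every tick (ExpensiveSubtree here is a placeholder for any costly widget tree):

```dart
FadeTransition(
  // The controller drives the alpha of the underlying OpacityLayer;
  // the child's paint() is not called again on each tick.
  opacity: _controller,
  child: const ExpensiveSubtree(),
)
```

The recorded Picture for the subtree stays untouched; only a single layer property changes per frame.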

Compositing: assembling the scene for the Engine

The final step in the framework is compositing. The layer tree must be converted into a Scene that the engine understands.

This is handled by SceneBuilder. In renderView.compositeFrame(), the framework traverses the layer tree and calls addToScene().

@protected
void drawFrame() {
  rootPipelineOwner.flushLayout();
  rootPipelineOwner.flushCompositingBits();
  rootPipelineOwner.flushPaint();
  if (sendFramesToEngine) {
    for (final RenderView renderView in renderViews) {
      renderView.compositeFrame(); // The Scene is built and handed to the engine here
    }
    rootPipelineOwner.flushSemantics();
    _firstFrameSent = true;
  }
}

Source: flutter/lib/src/rendering/binding.dart

As soon as view.render(scene) is called, the UI thread is freed. It has done its job: transformed animation state into a data structure. Responsibility now shifts to the Raster Thread.

Rasterization

The Flutter engine receives the ui.Scene and must convert drawing commands into signals for the GPU. This is where rasterization happens — transforming abstract commands into concrete GPU instructions.

Skia vs Impeller: the battle for smoothness

For a long time, Flutter used Skia as its primary graphics engine. Skia is powerful, but it had a problem: JIT shader compilation.

When an animation first used a complex effect (such as blur or a specific gradient), Skia generated shader code during frame rendering. This could take 20–50 ms, causing noticeable stutters (shader compilation jank).
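Before Impeller, the workaround Flutter documented for this was SkSL shader warm-up: capture the shaders an animation triggers during a profile run, then bundle them so compilation happens at build time instead of mid-frame. Roughly (the file name is the default example from the docs):

```shell
# Run in profile mode and capture compiled SkSL shaders
flutter run --profile --cache-sksl --purge-persistent-cache
# (press M in the console to write them to a .sksl.json file)

# Bundle the captured shaders into the release build
flutter build apk --bundle-sksl-path flutter_01.sksl.json
```

This helped, but it was per-device-class and per-scenario — a symptom fix rather than a cure, which is what motivated Impeller's AOT approach.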

Since 2023, Flutter has been actively introducing Impeller, a new renderer designed specifically for the framework.

| Technology | Skia | Impeller |
| --- | --- | --- |
| Shader compilation | JIT (runtime) | AOT (build time) |
| Predictability | Possible sudden lags | Stable frame time |
| Graphics API | OpenGL | Metal / Vulkan |
| Tessellation | CPU | GPU |

Impeller solves animation issues fundamentally: it compiles all shaders ahead of time. It also uses more efficient tessellation algorithms, turning curves into triangles that modern GPUs handle extremely efficiently. This enables stable 120 FPS even in graphically intensive scenes.

What are “GPU signals”?

These are low-level calls to graphics APIs (Metal on iOS or Vulkan on Android). The engine sends arrays of vertices (triangles) and textures to the GPU. The GPU executes these instructions and fills a memory region called the frame buffer.

GPU and Display

Once rasterization is complete:

  1. GPU writes the final image into memory.

  2. The display hardware reads this buffer (typically 60 or 120 times per second).

  3. Physical pixels change state via electrical signals (LCD or OLED), producing the image you see.

Final path from widgets to pixels

Let’s summarize the journey “from code to pixel”:

  1. Vsync: The OS signals that the screen is ready for a new frame.

  2. Ticker Wake-up: The animation metronome wakes up and calculates elapsed time.

  3. Math calculation: AnimationController computes the current value via Simulation.

  4. notifyListeners: Listeners are notified, including your setState.

  5. Marking dirty: Widgets are marked dirty and added to the rebuild queue.

  6. Build Phase: New widget configurations are created.

  7. Layout Phase: Sizes and positions are calculated.

  8. Compositing Bits: Layer requirements are determined.

  9. Paint Phase: Drawing commands are recorded.

  10. Compositing Phase: Layers are assembled into a Scene.

  11. Engine Hand-off: Scene is passed to the native engine.

  12. Rasterization: Commands are converted into GPU instructions.

  13. GPU & Display: The frame buffer is updated and pixels change.
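All thirteen steps above fire for something as small as this (a minimal sketch; PulseDemo is an invented name):

```dart
import 'package:flutter/material.dart';

class PulseDemo extends StatefulWidget {
  const PulseDemo({super.key});

  @override
  State<PulseDemo> createState() => _PulseDemoState();
}

class _PulseDemoState extends State<PulseDemo>
    with SingleTickerProviderStateMixin {
  late final AnimationController _controller = AnimationController(
    vsync: this, // steps 1-2: ties the Ticker to Vsync
    duration: const Duration(milliseconds: 300),
  );

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return GestureDetector(
      // steps 3-5: forward() starts the Simulation; each tick marks
      // the transition's render object dirty
      onTap: () => _controller.forward(from: 0),
      child: ScaleTransition(
        scale: _controller,
        child: const FlutterLogo(size: 64), // steps 6-13 run per frame
      ),
    );
  }
}
```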


This concludes our deep dive.

Now, the next time you write controller.forward(), you'll understand the massive system that springs into action just to move a single pixel on your screen.