PoseNet — Camera Feed Demo

Real-time, in-browser pose estimation using TensorFlow.js PoseNet. The browser grabs your camera feed, runs PoseNet on each frame, and draws keypoints and skeleton overlays without sending any pixels to a server.

Tags: JavaScript · TensorFlow.js · PoseNet · Webcam

What this demo does

The demo feeds the browser's camera stream into the TensorFlow.js PoseNet model. For each frame, PoseNet returns a set of keypoints for the major joints (shoulders, hips, knees, etc.), each with a position and a confidence score. Those keypoints are drawn on top of the video, giving a live view of how the model perceives your posture.
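
A minimal sketch of that per-frame call, assuming the `@tensorflow-models/posenet` package and an already-playing `<video>` element passed in as `video` (both names are illustrative, not taken from the demo's source):

```js
import * as posenet from '@tensorflow-models/posenet';

async function estimateFrame(video) {
  // Load the model once in a real app and reuse it; shown inline here for brevity.
  const net = await posenet.load();

  // One pose per frame; flipHorizontal mirrors coordinates for a selfie-style view.
  const pose = await net.estimateSinglePose(video, { flipHorizontal: true });

  // pose.keypoints holds 17 named joints, each with a position and confidence, e.g.
  // { part: 'leftShoulder', position: { x, y }, score: 0.98 }
  return pose.keypoints.filter((kp) => kp.score > 0.5);
}
```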

Implementation notes

  • Camera feed captured via getUserMedia (see the capture-and-draw sketch after this list).
  • Frames piped into PoseNet using TensorFlow.js, all on-device.
  • Skeleton and keypoints drawn to a canvas overlay.
  • Performance tuned by adjusting input resolution and output stride (see the load-config sketch below).
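
A sketch of the capture-and-draw loop covering the first three points, assuming a `<video>` element with id `video` and an overlay `<canvas>` with id `output` (ids and styling values are illustrative, not from the demo's source):

```js
import * as posenet from '@tensorflow-models/posenet';

async function setupCamera() {
  const video = document.getElementById('video');
  // The stream stays in the browser; no frames are uploaded anywhere.
  video.srcObject = await navigator.mediaDevices.getUserMedia({
    video: { width: 640, height: 480 },
    audio: false,
  });
  await video.play();
  return video;
}

function drawPose(ctx, pose, minScore = 0.5) {
  // Keypoints as small circles.
  for (const kp of pose.keypoints) {
    if (kp.score < minScore) continue;
    ctx.beginPath();
    ctx.arc(kp.position.x, kp.position.y, 4, 0, 2 * Math.PI);
    ctx.fillStyle = 'aqua';
    ctx.fill();
  }
  // Skeleton as lines between adjacent joints.
  for (const [a, b] of posenet.getAdjacentKeyPoints(pose.keypoints, minScore)) {
    ctx.beginPath();
    ctx.moveTo(a.position.x, a.position.y);
    ctx.lineTo(b.position.x, b.position.y);
    ctx.strokeStyle = 'aqua';
    ctx.lineWidth = 2;
    ctx.stroke();
  }
}

async function main() {
  const video = await setupCamera();
  const canvas = document.getElementById('output');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext('2d');
  const net = await posenet.load();

  async function frame() {
    const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    drawPose(ctx, pose);
    requestAnimationFrame(frame);
  }
  frame();
}

main();
```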
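
The resolution/stride trade-off from the last point is set when the model is loaded. A sketch of that config using the MobileNetV1 architecture, with illustrative values rather than the demo's actual settings:

```js
import * as posenet from '@tensorflow-models/posenet';

async function loadTunedModel() {
  // A smaller input resolution and a larger output stride trade accuracy for speed.
  return posenet.load({
    architecture: 'MobileNetV1',                   // lighter and faster than 'ResNet50'
    outputStride: 16,                              // commonly 8 or 16; larger is faster but coarser
    inputResolution: { width: 320, height: 240 },  // frames are scaled to this size before inference
    multiplier: 0.75,                              // MobileNet width multiplier; smaller is faster
  });
}
```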

Why it exists

This project sits at the intersection of motion and engineering. It's a playground for thinking about posture, movement quality, and how real-time inference might support training, coaching, or interactive sport experiences in the browser.