Expo Media Engine 1.0.0-alpha-4

Hardware-accelerated video composition and editing for Expo — one config for preview and export.

iOS uses AVFoundation; Android uses MediaCodec and OpenGL ES 2.0. Open source (MIT). No proprietary SDKs or per-minute APIs.

📦 Installation

npm install @projectyoked/expo-media-engine
npx expo prebuild

This release line is published as npm latest, so you do not need an @alpha dist-tag. The legacy 0.1.x API is available via npm install @projectyoked/expo-media-engine@0.1.3 — see the stable docs.

Expo Go is not supported. This module includes native Swift and Kotlin code that must be compiled into your app binary. You need a development build — run eas build --profile development or npx expo run:ios / npx expo run:android locally.

The expo prebuild step generates the native ios/ and android/ directories and wires up the module. If you've already run prebuild before, run it again after installing this package so the native projects are updated.

Requirements

| Requirement | Minimum |
| --- | --- |
| Expo SDK | 49+ |
| expo-modules-core | 1.0.0+ |
| iOS | 13.4+ |
| Android | API 21+ |
| React Native | 0.64+ |

📱 Platform notes

Things that are easy to trip over once you leave the happy path.

iOS export behavior

One heavy job at a time. composeCompositeVideo, exportComposition, stitchVideos, and compressVideo share a queue. Don’t start a second export until the first promise settles.

Overlays = two passes. If your composition includes text or image tracks, iOS encodes the base video first (video, audio, filters), then runs a second export to burn in Core Animation overlays. That avoids a Simulator crash that showed up when both the CIFilter compositor and the Core Animation video tool ran in a single session.

Trims and transitions: a clip's duration is its length on the final edit; if the clipStart/clipEnd window is longer or shorter than that, the engine time-scales the source to fit. Transition clips need a real overlap on the timeline — see Transitions.

Simulator vs device

The Simulator is great for UI work. For long exports or torture-testing the integration suite, a physical device tends to be more predictable — especially around video memory.

Metro and monorepos

If you install this package with file: or from a workspace, watch out for two copies of react-native. The preview view is registered under its own native module name; a split dependency tree can produce errors like "View config getter callback … returning nothing".

Use withMediaEngineMonorepoResolver from the package’s metro-preset.js so your app and the library resolve the same React Native. The repo’s example/metro.config.js is the reference.
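A minimal metro.config.js sketch, assuming the wrapper takes a standard Expo Metro config (the repo's example/metro.config.js is authoritative):

```javascript
// metro.config.js — monorepo sketch; exact wrapper behavior may differ,
// so compare against the repo's example/metro.config.js.
const { getDefaultConfig } = require('expo/metro-config');
const {
  withMediaEngineMonorepoResolver,
} = require('@projectyoked/expo-media-engine/metro-preset');

// Ensure the app and the library resolve one shared react-native instance.
module.exports = withMediaEngineMonorepoResolver(getDefaultConfig(__dirname));
```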

🚀 Quick start

The example below exports a two-clip video with a crossfade transition and background music. A high-quality 9:16 vertical output (1080×1920) is typical for mobile share videos.

import MediaEngine from '@projectyoked/expo-media-engine';

const outputUri = await MediaEngine.composeCompositeVideo({
  outputUri: 'file:///path/to/output.mp4',
  width:     1080,
  height:    1920,
  frameRate: 30,
  quality:   'high',
  tracks: [
    {
      type: 'video',
      clips: [
        {
          uri:                'file:///clip-a.mp4',
          startTime:          0,
          duration:           5,
          filter:             'warm',
          transition:         'crossfade',  // set on the outgoing clip
          transitionDuration: 1,
        },
        {
          uri:       'file:///clip-b.mp4',
          startTime: 4,                     // starts 1s before clip A ends
          duration:  5,
        },
      ],
    },
    {
      type: 'audio',
      clips: [{ uri: 'file:///music.mp3', startTime: 0, duration: 9, volume: 0.8 }],
    },
  ],
});

File paths: All URIs must be file:// paths on the device. Use expo-file-system's FileSystem.cacheDirectory or FileSystem.documentDirectory to build output paths. Example: `${FileSystem.cacheDirectory}output_${Date.now()}.mp4`

🎬 composeCompositeVideo

The primary export function. Renders a CompositionConfig to a video file on disk and resolves with outputUri when done.

iOS specifics are summarized under Platform notes (export queue, two-pass overlays, trims, transitions). Android runs the OpenGL composer in one go unless the encoder falls back.

composeCompositeVideo(config: CompositionConfig) → Promise<string>

The engine composites all tracks in order (bottom-to-top) and writes a single H.264 MP4. On both platforms the engine detects when re-encoding can be skipped entirely — for a single-clip composition with no transforms, filters, or overlays, the source bytes are passed through directly (zero quality loss, much faster).
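For example, a single-clip composition with no transforms, filters, or overlays is passthrough-eligible (sketch; FileSystem is from expo-file-system):

```javascript
// Single clip, no transforms/filters/overlays: the engine can pass the
// source bytes through without re-encoding (enablePassthrough defaults to true).
const uri = await MediaEngine.composeCompositeVideo({
  outputUri: `${FileSystem.cacheDirectory}copy.mp4`,
  tracks: [
    {
      type:  'video',
      clips: [{ uri: 'file:///source.mp4', startTime: 0, duration: 8 }],
    },
  ],
});
```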

Choosing quality vs bitrate

Use quality for convenience. Use videoBitrate when you need a specific file size target. At 1080p 30fps, typical values:

  • 'low' (1 Mbps) — acceptable for messaging/previews, visible compression artifacts at motion
  • 'medium' (4 Mbps) — good general purpose, indistinguishable from source for most content
  • 'high' (10 Mbps) — near-lossless; use when the output will be re-encoded downstream (e.g. platform upload)
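If you need a specific file size, you can derive videoBitrate from the duration instead. A rough sketch (bitrateForTargetSize is an illustrative helper; container overhead is ignored):

```javascript
// Rough videoBitrate (bps) for a target file size, leaving room for audio.
// targetMB * 8e6 is the total bit budget; subtract audio, divide by seconds.
function bitrateForTargetSize(targetMB, durationSec, audioBitrate = 128000) {
  const totalBits = targetMB * 8e6;
  const audioBits = audioBitrate * durationSec;
  return Math.max(0, Math.round((totalBits - audioBits) / durationSec));
}

// ~25 MB budget for a 60-second clip → about 3.2 Mbps of video
bitrateForTargetSize(25, 60); // → 3205333
```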

Output dimensions

If you omit width/height, the engine reads them from the first video clip's source metadata. Explicitly set them when mixing clips of different resolutions or when you want a specific aspect ratio (e.g. 1080×1920 for 9:16 Reels/TikTok).

Top-level config fields

| Field | Type | Description |
| --- | --- | --- |
| outputUri | string | Destination file path (file://…). Required. |
| width | number | Output width in pixels. Inferred from primary video source if omitted. |
| height | number | Output height in pixels. Inferred from primary video source if omitted. |
| frameRate | number | Output frame rate. Inferred from source if omitted. |
| quality | 'low' \| 'medium' \| 'high' | Bitrate shorthand: low = 1 Mbps, medium = 4 Mbps, high = 10 Mbps. |
| videoBitrate | number | Explicit video bitrate in bps. Overrides quality. |
| audioBitrate | number | Audio bitrate in bps. Default: 128000. |
| videoProfile | 'baseline' \| 'main' \| 'high' | H.264 profile. Default: baseline. |
| enablePassthrough | boolean | Allow zero-copy passthrough when no re-encoding is needed. Default: true. |
| tracks | CompositeTrack[] | Ordered array of tracks, bottom-most first. Required. |

🗂 Tracks & clips

A track has a type and an ordered array of clips. The tracks array is rendered bottom-to-top — the first entry sits at the back, the last entry sits in front.

| Track type | Description |
| --- | --- |
| 'video' | Video source clips with optional filters and transitions. |
| 'audio' | Audio-only source clips (mp3, m4a, wav) with volume and fade controls. |
| 'text' | Timed text or emoji overlays rendered at the given position. |
| 'image' | Timed image overlays with position, scale, rotation, and opacity. |

How audio works across tracks

The engine mixes audio from all tracks — both dedicated 'audio' tracks and the audio streams embedded in 'video' clips. Use volume: 0 on a video clip to silence its native audio while keeping the video, then add an independent 'audio' track for music.

Multiple video tracks (picture-in-picture)

You can stack more than one 'video' track. Each additional video track is composited on top of the previous ones using its x, y, scale, and opacity values — useful for picture-in-picture layouts.

tracks: [
  // Background: full-frame workout footage
  { type: 'video', clips: [{ uri: mainClip, startTime: 0, duration: 10 }] },

  // Foreground: small reaction cam in the corner
  {
    type: 'video',
    clips: [{
      uri:       camClip,
      startTime: 0,
      duration:  10,
      x:         0.78,   // normalized 0–1 from left
      y:         0.12,   // normalized 0–1 from top
      scale:     0.28,   // 28% of canvas size
    }],
  },
]

✂️ Clip properties

All clip types share these base fields. Type-specific fields (text, image) are covered in their own sections.

Coordinate system

Position (x, y) uses a normalized 0–1 space relative to the output canvas. (0, 0) is the top-left corner, (0.5, 0.5) is the center, (1, 1) is the bottom-right. The value represents the center point of the clip.
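Converting between this normalized space and screen pixels is a pair of one-liners; a small illustrative sketch:

```javascript
// Normalized (0–1) center position → pixel coordinates on a canvas, and back.
function toPixels(nx, ny, canvasWidth, canvasHeight) {
  return { px: nx * canvasWidth, py: ny * canvasHeight };
}

function toNormalized(px, py, canvasWidth, canvasHeight) {
  return { nx: px / canvasWidth, ny: py / canvasHeight };
}

// Center of a 1080×1920 canvas
toPixels(0.5, 0.5, 1080, 1920); // → { px: 540, py: 960 }
```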

Source trimming

Use clipStart and clipEnd to read from a slice of the source file. Values are seconds on the source timeline, not the composition timeline.

duration is how long that clip should last on your edit. If the trimmed source segment doesn’t match (for example you marked 12s of source but set duration: 10), the engine adjusts playback speed so the clip still fills those 10 seconds — same idea as an explicit speed field.

// Use seconds 5–15 of a 60-second source; plays as 10s on the timeline
{ uri: 'file:///long-workout.mp4', startTime: 0, duration: 10, clipStart: 5, clipEnd: 15 }

Speed

speed scales how fast the source plays relative to real time. A 5-second clip at speed: 2 plays out in 2.5 seconds on the timeline. Use values below 1 for slow-motion, above 1 for fast-forward. The duration field should reflect the output duration after the speed change.

// 10-second source clip played back as 5-second fast-forward
{ uri: 'file:///clip.mp4', startTime: 0, duration: 5, speed: 2.0 }

Resize modes

  • 'cover' — scales the clip to fill the frame, cropping excess edges. Best for full-frame video.
  • 'contain' — scales to fit entirely within the frame, adding letterbox/pillarbox bars.
  • 'stretch' — distorts to exactly fill the frame. Rarely used.
| Property | Type | Description |
| --- | --- | --- |
| uri | string | File URI (file://…). Not required for text clips. |
| startTime | number | When the clip appears on the timeline (seconds). |
| duration | number | How long the clip plays (seconds). |
| x | number | Normalized horizontal center 0–1. Default: centered. |
| y | number | Normalized vertical center 0–1. Default: centered. |
| scale | number | Size multiplier. Default: 1.0. |
| rotation | number | Degrees clockwise. Default: 0. |
| opacity | number | Transparency 0–1. Default: 1.0. |
| resizeMode | string | 'cover' \| 'contain' \| 'stretch'. Default: 'cover'. |
| clipStart | number | Trim start within source (seconds). Default: 0. |
| clipEnd | number | Trim end within source. -1 = full source. Default: -1. |
| speed | number | Playback speed multiplier. 0.5 = slow-mo, 2.0 = fast-forward. Default: 1.0. |
| filter | FilterType | Color filter applied to this clip. See Filters. |
| filterIntensity | number | Filter strength 0–1. Default: 1.0. |
| transition | TransitionType | Transition at this clip's boundary. See Transitions. |
| transitionDuration | number | Transition window in seconds. Clips must overlap by at least this amount. |
| volume | number | Audio volume 0–1. Default: 1.0. |
| fadeInDuration | number | Audio fade in (seconds). Default: 0. |
| fadeOutDuration | number | Audio fade out (seconds). Default: 0. |
| volumeEnvelope | VolumeEnvelope | Keyframe-based volume automation. See Audio & volume. |
| animations | ClipAnimations | Keyframe arrays for x, y, scale, rotation, opacity over time. |
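The animations field takes keyframe arrays per property. The exact ClipAnimations shape isn't spelled out in this section; as an illustrative sketch, assuming it mirrors the { time, … } keyframe pattern used by volumeEnvelope (verify against src/index.d.ts):

```javascript
// Hypothetical shape — assumes ClipAnimations mirrors volumeEnvelope's
// keyframe pattern. Check src/index.d.ts before relying on this.
const animatedClip = {
  text:      'New PR!',
  startTime: 0,
  duration:  3,
  animations: {
    opacity: [
      { time: 0,   value: 0 },  // fade the clip in...
      { time: 0.5, value: 1 },  // ...over the first half second
    ],
    scale: [
      { time: 0, value: 0.8 },
      { time: 1, value: 1.0 },  // subtle zoom-in entrance
    ],
  },
};
```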

🎨 Filters

Set filter on any video or image clip. Use filterIntensity (0–1) to blend from no effect (0) to full effect (1). Filters are applied in hardware on both platforms — iOS via CIFilter, Android via OpenGL ES fragment shaders — so they add negligible performance overhead.

// Subtle warm grade at 60% intensity
{ uri: 'file:///clip.mp4', startTime: 0, duration: 5, filter: 'warm', filterIntensity: 0.6 }

| Value | Effect |
| --- | --- |
| 'grayscale' | Full desaturation |
| 'sepia' | Warm brown tone |
| 'vignette' | Dark edge falloff |
| 'invert' | Color inversion |
| 'brightness' | Luminance boost or reduction |
| 'contrast' | Contrast adjustment |
| 'saturation' | Color intensity |
| 'warm' | Red/yellow shift |
| 'cool' | Blue shift |

🔀 Transitions

Set transition on the outgoing clip (the one that's ending). The outgoing and incoming clips must overlap on the timeline — the overlap window is the transition duration.

Timeline overlap rule

The key insight: to create a 1-second crossfade between clip A (5s) and clip B (5s), clip B's startTime must begin 1 second before clip A ends. The engine renders both clips during the overlap and blends them.

// Clip A: 0–5s. Clip B starts at 4s so there is a 1s overlap.
// Set transition on clip A (the outgoing clip).
clips: [
  {
    uri:                'file:///clip-a.mp4',
    startTime:          0,
    duration:           5,
    transition:         'crossfade',
    transitionDuration: 1,        // 1-second blend window
  },
  {
    uri:       'file:///clip-b.mp4',
    startTime: 4,                 // starts 1s before clip A ends
    duration:  5,
  },
]

Overlap is required. The incoming clip’s startTime should fall before the outgoing clip ends, by at least transitionDuration. If the gap is too small, you’ll get a hard cut or odd blends — design the overlap first, then tune transitionDuration.
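The overlap arithmetic is easy to centralize. An illustrative helper (not part of the API) that computes the incoming clip's startTime from the outgoing clip:

```javascript
// Start the incoming clip so it overlaps the outgoing clip by exactly
// transitionDuration seconds (0 when the outgoing clip has no transition).
function incomingStartTime(outgoing) {
  const { startTime, duration, transitionDuration = 0 } = outgoing;
  return startTime + duration - transitionDuration;
}

// Clip A runs 0–5s with a 1s crossfade → clip B starts at 4s
incomingStartTime({ startTime: 0, duration: 5, transitionDuration: 1 }); // → 4
```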

| Value | Effect |
| --- | --- |
| 'crossfade' | Opacity blend between clips |
| 'fade' | Fade to black, then fade in |
| 'slide-left' | Incoming slides in from the right |
| 'slide-right' | Incoming slides in from the left |
| 'slide-up' | Incoming slides in from the bottom |
| 'slide-down' | Incoming slides in from the top |
| 'zoom-in' | Outgoing zooms out while incoming zooms to normal |
| 'zoom-out' | Outgoing shrinks, incoming enters at full size |

✍️ Text styling

Text clips support a textStyle object. All fields are optional. The clip's x/y properties control the center position of the text in normalized 0–1 coordinates. Emoji are fully supported — pass them directly in the text field.

// Bold white text with a dark stroke, centered near the top
{
  text:      '🔥 Personal Record!',
  startTime: 1.5,
  duration:  3,
  x:         0.5,     // horizontally centered
  y:         0.1,     // 10% from top
  textStyle: {
    fontSize:     58,
    fontWeight:   'bold',
    color:        '#FFFFFF',
    strokeColor:  '#000000',
    strokeWidth:  2,
    shadowColor:  '#000000',
    shadowRadius: 8,
  },
}

// Text on a colored pill background
{
  text:      'Day 1',
  startTime: 0,
  duration:  5,
  x: 0.5, y: 0.88,
  textStyle: {
    fontSize:         36,
    color:            '#FFFFFF',
    backgroundColor:  '#E11D48',
    backgroundPadding: 12,
  },
}

| Property | Type | Description |
| --- | --- | --- |
| color | string | Hex color. Default: #FFFFFF. |
| fontSize | number | Points. Default: 40. |
| fontWeight | 'normal' \| 'bold' | Default: 'normal'. |
| backgroundColor | string | Pill background color. Hidden if omitted. |
| backgroundPadding | number | Padding inside background pill (px). Default: 8. |
| shadowColor | string | Drop shadow color. |
| shadowRadius | number | Shadow blur radius. Default: 0. |
| shadowOffsetX | number | Shadow horizontal offset. Default: 0. |
| shadowOffsetY | number | Shadow vertical offset. Default: 0. |
| strokeColor | string | Text outline color. |
| strokeWidth | number | Text outline width. Default: 0. |

🔊 Audio & volume

Audio is mixed from all audio and video tracks. Priority order: volumeEnvelope (if provided) → fade in/out → flat volume.

  • volume — flat multiplier 0–1 applied to the entire clip
  • fadeInDuration / fadeOutDuration — linear ramp at clip start/end
  • volumeEnvelope.keyframes — arbitrary time-based automation; overrides fades when provided

Common patterns

// Pattern 1: Silence video audio, add background music
tracks: [
  {
    type: 'video',
    clips: [{ uri: clipUri, startTime: 0, duration: 30, volume: 0 }], // mute video
  },
  {
    type: 'audio',
    clips: [{
      uri:             musicUri,
      startTime:       0,
      duration:        30,
      volume:          0.9,
      fadeOutDuration: 2,   // fade out last 2 seconds
    }],
  },
]

// Pattern 2: Duck music when video has important audio
tracks: [
  { type: 'video', clips: [{ uri: clipUri, startTime: 0, duration: 30, volume: 1.0 }] },
  {
    type: 'audio',
    clips: [{
      uri: musicUri, startTime: 0, duration: 30,
      volumeEnvelope: {
        keyframes: [
          { time: 0,  volume: 0.8 },   // music plays normally
          { time: 5,  volume: 0.2 },   // duck down under speech
          { time: 15, volume: 0.8 },   // back up
        ],
      },
    }],
  },
]

Volume keyframe automation

Keyframe time values are in seconds relative to the composition timeline (not the clip's local time). The engine linearly interpolates volume between keyframes.

volumeEnvelope: {
  keyframes: [
    { time: 0,   volume: 0 },   // start silent
    { time: 1,   volume: 1 },   // ramp up over 1 second
    { time: 8,   volume: 1 },   // hold at full volume
    { time: 9.5, volume: 0 },   // ramp out
  ],
},
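The linear interpolation described above can be sketched in plain JS (illustrative, not the engine's implementation):

```javascript
// Linearly interpolate volume at time t from sorted { time, volume } keyframes.
function volumeAt(keyframes, t) {
  if (t <= keyframes[0].time) return keyframes[0].volume;
  for (let i = 1; i < keyframes.length; i++) {
    const a = keyframes[i - 1], b = keyframes[i];
    if (t <= b.time) {
      const f = (t - a.time) / (b.time - a.time); // 0–1 within this segment
      return a.volume + (b.volume - a.volume) * f;
    }
  }
  return keyframes[keyframes.length - 1].volume; // hold the last value
}

const env = [
  { time: 0,   volume: 0 },
  { time: 1,   volume: 1 },
  { time: 8,   volume: 1 },
  { time: 9.5, volume: 0 },
];
volumeAt(env, 0.5); // → 0.5 (mid-ramp)
volumeAt(env, 4);   // → 1   (hold)
```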

Preview engine overview

The preview system is a two-layer architecture designed for a CapCut-style editor. The native layer handles video accurately; the JS layer handles interactive overlays.

  • Native video layer (MediaEnginePreview) — renders video and audio tracks at ~30 fps using the same hardware pipeline as the export engine. Filters, transitions, speed changes, and opacity all match the export exactly.
  • JS overlay layer (useCompositionOverlays) — returns active text and image clips at the current time with all transforms resolved. Feed this to Skia, Reanimated, or standard RN views to render interactive draggable elements on top of the video.
  • Single source of truth — the same CompositionConfig drives both layers and the final export. No coordinate conversion, no drift.

Why this split? Rendering text and images natively inside the video pipeline makes them non-interactive — you can't tap or drag a CATextLayer. By keeping them in JS, you get full gesture support (pan, pinch, rotate) while still knowing exactly where they'll land in the export because both systems use the same normalized 0–1 coordinate space.

import { useRef, useState } from 'react';
import { View, StyleSheet } from 'react-native';
import {
  MediaEnginePreview,
  useCompositionOverlays,
} from '@projectyoked/expo-media-engine';

export function CompositionEditor({ config }) {
  const previewRef  = useRef(null);
  const [time, setTime]         = useState(0);
  const [playing, setPlaying]   = useState(false);

  // Active text/image clips with resolved x/y/scale/rotation/opacity
  const overlays = useCompositionOverlays(config, time);

  return (
    <View style={StyleSheet.absoluteFill}>
      {/* Native video layer: filters, transitions, speed */}
      <MediaEnginePreview
        ref={previewRef}
        config={config}
        isPlaying={playing}
        onTimeUpdate={e => setTime(e.nativeEvent.currentTime)}
        onLoad={e => console.log('Duration:', e.nativeEvent.duration)}
        style={StyleSheet.absoluteFill}
      />

      {/* JS overlay layer: Skia / Reanimated drag-and-resize */}
      {overlays.map(o => (
        <InteractiveOverlay key={o.id} overlay={o} />
      ))}
    </View>
  );
}

📺 MediaEnginePreview

A native Expo view that renders video and audio at ~30 fps using the export-accurate pipeline. Import as a named export — not the default.

import { MediaEnginePreview } from '@projectyoked/expo-media-engine';

Blank preview in a monorepo? See Platform notes — Metro must resolve one shared react-native for the app and this package.

Play / pause / scrub example

import { useState, useRef } from 'react';
import { View, StyleSheet, Pressable, Text } from 'react-native';
// Any slider component works; this example assumes @react-native-community/slider
import Slider from '@react-native-community/slider';
import { MediaEnginePreview } from '@projectyoked/expo-media-engine';

export function VideoPlayer({ config }) {
  const previewRef            = useRef(null);
  const [playing, setPlaying] = useState(false);
  const [time, setTime]       = useState(0);
  const [duration, setDur]    = useState(0);

  return (
    <View style={styles.container}>
      <MediaEnginePreview
        ref={previewRef}
        config={config}
        isPlaying={playing}
        onLoad={e  => setDur(e.nativeEvent.duration)}
        onTimeUpdate={e => setTime(e.nativeEvent.currentTime)}
        onPlaybackEnded={() => setPlaying(false)}
        style={StyleSheet.absoluteFill}
      />

      {/* Scrub while paused: update seekTo on slider change */}
      <Slider
        value={time}
        maximumValue={duration}
        onValueChange={val => {
          if (!playing) previewRef.current?.seekTo(val);
        }}
      />

      <Pressable onPress={() => setPlaying(p => !p)}>
        <Text>{playing ? 'Pause' : 'Play'}</Text>
      </Pressable>
    </View>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1 },
});

Scrubbing while paused: set isPlaying={false} then call previewRef.current.seekTo(seconds) in response to a slider gesture. Use the currentTime prop when you need declarative control (e.g. a driven animation); use the seekTo ref for imperative seeks.

Props

| Prop | Type | Description |
| --- | --- | --- |
| config | CompositionConfig | The composition to preview. Required. |
| isPlaying | boolean | Play / pause state. Default: false. |
| muted | boolean | Mute audio. Default: false. |
| currentTime | number | Seek position in seconds. Update while paused to scrub the timeline. |
| style | ViewStyle | Standard React Native style prop. |

Events

| Event | Payload | Description |
| --- | --- | --- |
| onLoad | { duration: number } | Fired once the engine is ready. |
| onTimeUpdate | { currentTime: number } | Fires at ~30 fps during playback. |
| onPlaybackEnded | {} | Fired when playback reaches the end. |
| onError | { message: string } | Fired on fatal engine errors. |

Ref

previewRef.current.seekTo(seconds) — imperative seek. Alternative to setting the currentTime prop.

🧩 useCompositionOverlays

Returns all text and image clips active at currentTime, with transforms and keyframe animations resolved. Memoized — only recalculates when config or currentTime changes. The id is stable across re-renders, making it safe to use as a React key.

import { useCompositionOverlays } from '@projectyoked/expo-media-engine';

const overlays = useCompositionOverlays(config, currentTime);
// returns: ActiveOverlay[]

Mapping to Skia / RN components

Each ActiveOverlay gives you everything you need to position and render it. The x/y values are 0–1 normalized — multiply by your canvas dimensions to get screen pixels. localTime is useful for driving per-clip animations (e.g. a typewriter entrance effect).

import { View, Text, Image, StyleSheet } from 'react-native';

// Minimal example: render each overlay as an absolutely-positioned RN view.
// The same data maps directly onto Skia or Reanimated primitives.
function OverlayLayer({ overlays, canvasWidth, canvasHeight }) {
  return (
    <View style={StyleSheet.absoluteFill} pointerEvents="box-none">
      {overlays.map(o => {
        const left = o.x * canvasWidth;
        const top  = o.y * canvasHeight;

        return (
          <DraggableOverlay
            key={o.id}
            style={{
              position: 'absolute',
              left,
              top,
              transform: [
                { translateX: -left },  // anchor to center
                { translateY: -top },
                { scale:    o.scale },
                { rotate:   `${o.rotation}deg` },
                { translateX: left },
                { translateY: top },
              ],
              opacity: o.opacity,
            }}
          >
            {o.type === 'text' ? (
              <Text style={{ fontSize: o.fontSize, color: o.color, fontWeight: o.fontWeight }}>
                {o.text}
              </Text>
            ) : (
              <Image source={{ uri: o.uri }} style={{ width: 80, height: 80 }} />
            )}
          </DraggableOverlay>
        );
      })}
    </View>
  );
}

Persisting gesture changes: when the user drags/resizes an overlay, update the corresponding clip's x, y, scale, or rotation in your config state. Because useCompositionOverlays reads from config, the overlay snaps to the new position on the next render — and the same values are used by the export engine automatically.
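A sketch of that state update, assuming config lives in React state; the moveClip helper and the track/clip indices are illustrative, not part of the API:

```javascript
// Immutably write a dragged overlay's new normalized position into config.
// trackIndex/clipIndex would be parsed from the overlay id ("track-{n}-clip-{n}").
function moveClip(config, trackIndex, clipIndex, nx, ny) {
  const tracks = config.tracks.map((track, t) =>
    t !== trackIndex
      ? track
      : {
          ...track,
          clips: track.clips.map((clip, c) =>
            c !== clipIndex ? clip : { ...clip, x: nx, y: ny }
          ),
        }
  );
  return { ...config, tracks };
}

// On drag end: setConfig(cfg => moveClip(cfg, trackIndex, clipIndex, nx, ny));
```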

ActiveOverlay fields

| Field | Type | Description |
| --- | --- | --- |
| id | string | Stable key: "track-{n}-clip-{n}". Use as React key. |
| type | 'text' \| 'image' | Clip type. |
| x / y | number | Resolved center position 0–1. Multiply by canvas size to get screen pixels. |
| scale | number | Resolved scale multiplier. |
| rotation | number | Resolved rotation in degrees. |
| opacity | number | Resolved opacity 0–1. |
| localTime | number | Seconds since this clip's startTime. Drive per-clip entrance animations with this. |
| text | string | Text content (text clips only). |
| uri | string | Image file URI (image clips only). |
| color, fontSize, fontWeight, shadowColor, strokeColor, … | various | All text style fields resolved from textStyle. |

🔗 stitchVideos

Concatenate videos end-to-end in the order provided. Returns the output URI on success.

stitchVideos(uris: string[], outputUri: string) → Promise<string>

await MediaEngine.stitchVideos(
  ['file:///clip1.mp4', 'file:///clip2.mp4', 'file:///clip3.mp4'],
  'file:///output.mp4'
);

Fast path vs transcoding: on Android, the engine first tries a fast mp4parser byte-copy (no re-encoding, instant, lossless). If the clips have incompatible codecs, resolutions, or container metadata, it falls back to a full transcode pass. On iOS, concatenation always uses AVMutableComposition with passthrough export. Expect the fast path when all clips come from the same camera and share identical encoding parameters.

Use stitchVideos for simple joins with no visual edits. For anything with filters, transitions, text, or mixed sources, use composeCompositeVideo instead.

📉 compressVideo

Re-encode a video at a lower bitrate or resolution. Useful for reducing file size before upload or sharing.

compressVideo(config: CompressVideoConfig) → Promise<string>

// Compress for upload — medium quality, capped at 720p
await MediaEngine.compressVideo({
  inputUri:  'file:///input.mp4',
  outputUri: 'file:///compressed.mp4',
  quality:   'medium',
  maxWidth:  1280,
  maxHeight: 720,
});

H.264 vs H.265

H.265 (HEVC) produces roughly 40% smaller files at equivalent quality compared to H.264. The trade-offs are slower encoding (about 2× on Android) and narrower player compatibility — older devices and some platforms can't decode HEVC. H.265 is available on Android via the codec: 'h265' option. On iOS the export always uses H.264 regardless of this field.

As a rule of thumb: use 'h264' for maximum compatibility, 'h265' when file size is critical (e.g. low-bandwidth upload).
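A rough size estimate helps choose: total output bits are approximately (video bitrate + audio bitrate) × duration. A sketch (estimatedSizeMB is illustrative; container overhead is ignored):

```javascript
// Approximate output size in MB for a given encode configuration.
function estimatedSizeMB(videoBitrate, audioBitrate, durationSec) {
  return ((videoBitrate + audioBitrate) * durationSec) / 8e6;
}

// 60s at 4 Mbps video + 128 kbps audio → ~31 MB
estimatedSizeMB(4000000, 128000, 60); // ≈ 30.96
```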

| Field | Type | Description |
| --- | --- | --- |
| inputUri | string | Source file. Required. |
| outputUri | string | Destination file. Required. |
| quality | string | 'low' (1 Mbps) \| 'medium' (4 Mbps) \| 'high' (8 Mbps). Ignored if bitrate is set. |
| bitrate | number | Explicit video bitrate in bps. |
| audioBitrate | number | Audio bitrate in bps. Default: 128000. |
| width / height | number | Explicit output dimensions. |
| maxWidth / maxHeight | number | Constrain dimensions proportionally. |
| frameRate | number | Output frame rate. |
| codec | string | 'h264' \| 'h265' (Android only). Default: 'h264'. |

〰️ getWaveform

Decodes an audio file and returns a normalized array of RMS amplitude values (0–1) representing energy across the file. Use it to render a scrollable waveform timeline or a waveform thumbnail.

getWaveform(uri: string, samples?: number) → Promise<number[]>

const amplitudes = await MediaEngine.getWaveform('file:///audio.mp3', 200);
// number[] of length 200, each value 0.0–1.0

Choosing a sample count

  • 200–400 — scrollable timeline waveform for a typical mobile screen width
  • 50–80 — compact waveform thumbnail or inline clip preview
  • 1000+ — high-resolution waveform (e.g. desktop-scale editor). Higher counts take proportionally longer to generate.

Works on both audio files (mp3, m4a, wav) and video files — the engine extracts the audio track automatically. Pass the same URI you'd give to a video composition clip.
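Mapping the returned amplitudes to bar heights is straightforward. A sketch for a fixed-height timeline strip (toBarHeights is illustrative):

```javascript
// Convert normalized amplitudes (0–1) into pixel bar heights, with a minimum
// height so silent stretches still render a visible baseline.
function toBarHeights(amplitudes, maxHeight, minHeight = 2) {
  return amplitudes.map(a => Math.max(minHeight, Math.round(a * maxHeight)));
}

toBarHeights([0, 0.5, 1], 40); // → [2, 20, 40]
```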

🎵 extractAudio

Extracts the audio track from a video file and writes it to an .m4a (AAC in MPEG-4 container) file. The audio is remuxed without re-encoding — fast and lossless on both platforms.

extractAudio(videoUri: string, outputUri: string) → Promise<string>

const audioUri = await MediaEngine.extractAudio(
  'file:///workout.mp4',
  `${FileSystem.cacheDirectory}audio_${Date.now()}.m4a`
);

Common use cases:

  • Extract audio before passing it to getWaveform for a timeline visualization
  • Save a clip's audio for later mixing in a separate audio track
  • Provide a share sheet audio-only export

Output path: the output URI must end in .m4a and the directory must exist. If a file already exists at that path it will be overwritten.

🔍 isAvailable

Returns true if the native module is linked and callable. Returns false when running in Expo Go, a web browser, or any environment where the native binary hasn't been compiled in.

isAvailable() → boolean

import MediaEngine from '@projectyoked/expo-media-engine';

// Guard at the top of a screen component
if (!MediaEngine.isAvailable()) {
  // Show a fallback UI or return early
  return <Text>Video editing is not available in this environment.</Text>;
}

// Or as a one-time startup check
useEffect(() => {
  if (!MediaEngine.isAvailable()) {
    Alert.alert('Build required', 'Run expo prebuild and rebuild the app.');
  }
}, []);

In production apps you'll almost never need this guard since you control the build environment. It's most useful during development when toggling between Expo Go and a dev client.

🟦 TypeScript types

All types ship with the npm package as src/index.d.ts. The definition file is identical to the one on GitHub.