Transcoding API

The Qencode API is structured around REST. All API methods described below accept POST parameters as application/x-www-form-urlencoded and return JSON-encoded responses. If a method call fails, the response contains an 'error' param set to a value greater than 0 (the error code). On success, the response contains an 'error' param set to 0.

note
Note: A quick overview on working with our Transcoding API
Method: POST
Params: application/x-www-form-urlencoded
Returns: JSON
Success: Error Code = 0
Failure: Error Code = a value from our "List of Error Codes and Values" below

Getting Access Token

POST
/v1/access_token

Qencode requires an api_key to generate a session-based token to authenticate requests and launch tasks. You can view and manage the API keys associated with your Projects inside of your Qencode Account.

To get started, use the /v1/access_token method to acquire the session-based token, which you will use to authenticate all other requests through the Qencode API.

warning
Caution
To build a secure solution, we strongly recommend that you do NOT call this method directly from any client application, as doing so exposes your API key publicly. Instead, obtain the session token on your server and then pass it to the client app (see the sketch at the end of this section).
Arguments

For transcoding, an API key is assigned to each Project created in your Qencode account. After logging into your account, you can manage your API keys on the Projects page, as well as track the usage of each Project on the Statistics page.

Returns

After API key authentication is complete, you will receive this session-based token, which is used to authenticate all other requests through the Qencode API.

Request Example

Replace the value below with your API key. An API key can be found for each Project in your account on qencode.com.

curl https://api.qencode.com/v1/access_token \  
   -d api_key=your_api_key
API_KEY = "your_api_key"
client = qencode.client(API_KEY)
$apiKey = 'your_api_key';
$q = new QencodeApiClient($apiKey);
String apiKey = "your_api_key";
QencodeApiClient client = new QencodeApiClient(apiKey);
const apiKey = "your_api_key";
const qencodeApiClient = new QencodeApiClient(apiKey);
var apiKey = "your_api_key";
var q = new QencodeApiClient(apiKey);
Response Example

The token returned should be passed to the /v1/create_task API method.

{
 "token": "1357924680",
 "expire": "2020-12-31 23:59:59"
}
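
As a minimal sketch of the server-side approach recommended above (Flask and the /qencode-token endpoint name are illustrative assumptions, not part of the Qencode API), your backend could fetch the token and hand only the short-lived token to client apps:

import os

import requests
from flask import Flask, jsonify

app = Flask(__name__)
QENCODE_API_KEY = os.environ["QENCODE_API_KEY"]  # keep the key on the server only

@app.route("/qencode-token", methods=["POST"])
def qencode_token():
    # Exchange the API key for a session token on the server side.
    resp = requests.post(
        "https://api.qencode.com/v1/access_token",
        data={"api_key": QENCODE_API_KEY},
    )
    data = resp.json()
    if data.get("error"):
        return jsonify({"error": data["error"]}), 502
    # Hand only the short-lived token (and its expiration) to the client app.
    return jsonify({"token": data["token"], "expire": data["expire"]})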

Creating a Task

POST
/v1/create_task

Once you have received your token, use the /v1/create_task method to receive the task_token, which is used to define your transcoding parameters in the next step. You will also receive an upload_url which can be used for direct uploads via the /v1/upload_file method.

Arguments

This session token is used to get the task_token, which is required to start transcoding jobs.

Returns

The transcoding task token (also referred to as Job ID) uniquely identifies a transcoding job in the system. You can use this value to look up jobs in the UI or as an argument to the /v1/status method to get the status of jobs.

When uploading videos with the tus.io protocol, upload_url is the endpoint used to upload video files directly to the Qencode servers.
Request Example
curl https://api.qencode.com/v1/create_task \  
   -d token=76682314a86ed377730873394f8172f2
task = client.create_task()
$task = $q->createTask();
TranscodingTask task = client.CreateTask();
let task = qencodeApiClient.CreateTask();
var task = q.CreateTask();
Response Example
{
 "error": 0,
 "upload_url": "https://storage.qencode.com/v1/upload_file",
 "task_token": "471272a512d76c22665db9dcee893409"
}

Starting a Task

POST
/v1/start_encode2

Starts a transcoding job that contains all the parameters needed to transcode your videos into the formats, codecs, and resolutions you need, along with a list of fully customizable settings. Pass the task_token with the /v1/start_encode2 method, along with the query JSON object containing the request with your transcoding parameters. You can also include the payload parameter to send additional data with the task that can be retrieved in the future. The source parameter within the query JSON object can also be used for uploads by specifying 'tus:<file_uuid>'. You can also use the stitch parameter to stitch multiple videos together into one. See the Tutorial for Stitching Videos.

Arguments

The token created for this task. See /v1/create_task method.

The query JSON object contains all of the input attributes used for defining the transcoding settings for the task.

Any string up to 1000 characters long that allows you to pass additional data or JSON objects (e.g. internal IDs) which is sent back to your server later with the HTTP callback.
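
For example, a hedged sketch using the Python requests library (the internal ID fields are made up for illustration) that JSON-encodes your own identifiers into the payload so they come back with the callback:

import json

import requests

task_token = "your_task_token"  # returned by /v1/create_task
query = '{"query": {"source": "https://your-server.com/video.mp4", "format": [{"output": "mp4"}]}}'

# Hypothetical internal identifiers to be echoed back with the HTTP callback.
payload = json.dumps({"video_id": 42, "user_id": "abc123"})  # keep under 1000 characters

requests.post(
    "https://api.qencode.com/v1/start_encode2",
    data={"task_token": task_token, "query": query, "payload": payload},
)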

Returns

Right after the task is launched, the endpoint https://api.qencode.com/v1/status is available to retrieve the basic set of task status attributes. To get extended status attributes (like completion percent), please refer to the /v1/status method.

Input Objects Structure
Attributes

The query is the main part of your API Request since it contains the vast majority of the features and customizable parameters used to define your output settings.

Attributes

Defines the source video's URI, which is either a video URL or the ID returned by the upload_file method.

For direct uploads the source URI will be in 'tus:<file_uuid>' format. See Direct video upload for more information.

Note: Both the source and the stitch parameters cannot be used for the same task. The source parameter is used for when you only have one source (input) video. The stitch parameter is used if you want to combine several source (input) videos together to form a single output.

Use the stitch parameter in order to combine several input videos into a single one. Stitch should be a json-list of URLs or video-objects. When using object form you can specify start_time and duration attributes.

Either specify the source or the stitch parameter: use source if you have a single input file and stitch if you need to stitch several files together (see the sketch below).
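
As a short sketch of the difference (placeholder URLs; see the Tutorial for Stitching Videos for the object form with start_time and duration), a stitch query using the URL-list form might look like this:

{
  "query": {
    "stitch": [
      "https://your-server.com/intro.mp4",
      "https://your-server.com/main.mp4"
    ],
    "format": [
      {"output": "mp4"}
    ]
  }
}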

Each of the objects in this list is used to define all the transcoding parameters for each output format. See the format object attributes description below for more details.
Attributes

Output media format. Currently supported values are mp4, webm, advanced_hls, advanced_dash, webm_dash, repack, mp3, hls_audio, flac, gif, thumbnail, thumbnails, metadata, speech_to_text.

See Supported formats section for more details.

The thumbnail & thumbnails values are used for creation of thumbnail images. See the Create thumbnails section for more details.

The repack output type is used for transmuxing, where no transcoding is done and only the media container is changed in the output. See the Transmuxing tutorial for more details.

In order to create a smaller video clip from the source video, start_time is used along with duration to define the portion of the video to be used for the output. Specifies the start time (in seconds) in the input video to begin transcoding from.

Specifies duration (in seconds) of the output audio or video clip to be transcoded.

Describes the output endpoint, path, credentials and permissions for the destination of the output files created as a result of the API request. You can save to multiple destinations by putting all of your destination objects into an array. Qencode offers a wide range of options for destinations, some of which are covered in our Media Storage Tutorials section.

note
Note
If you don't specify a destination, your video will be available to download from our servers for 24 hours.
Attributes

Specifies the output url. E.g. s3://example.com/bucket/video.mp4.

For 'mp4', 'webm', 'mp3' and 'thumbnail' outputs it should contain path and name of the output file.

For HLS or MPEG-DASH, the destination url should be the path to a folder in your bucket.

For 'thumbnails' output it should be the path to a folder where the thumbnails and the .vtt file are saved.

For 'speech_to_text' output it should be the path to a folder where the transcript and subtitle files are saved.

Supported storage prefixes are:

  • s3:// - for any S3-compatible storage (Qencode, AWS, GCS, DigitalOcean, etc.)
  • b2:// - for Backblaze B2
  • azblob:// - for Azure Blob Storage
  • ftp:// or ftps:// - for any FTP server
  • sftp:// - for any FTP over SSH server
Your access key for S3 bucket, or username for FTP server, etc.
Your secret key for S3 bucket, or password for FTP server, etc.

For S3 only. Specifies object access permissions. For AWS possible values are: 'private', 'public-read', 'authenticated-read', 'bucket-owner-read' and others described in Access Control List Overview. Default value is 'private'.

Specify 'public-read' value in order to make output video publicly accessible.

Only for AWS S3. Specifies storage class for the output. You can specify REDUCED_REDUNDANCY value in order to lower your storage costs for noncritical, reproducible data. See Reduced redundancy storage description.
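
For instance (bucket names and credentials are placeholders), a format entry saving the same MP4 output to two destinations could look like this:

{
  "output": "mp4",
  "destination": [
    {
      "url": "s3://us-west.s3.qencode.com/yourbucket/output.mp4",
      "key": "your_access_key",
      "secret": "your_secret_key",
      "permissions": "public-read"
    },
    {
      "url": "ftp://ftp.example.com/videos/output.mp4",
      "key": "ftp_username",
      "secret": "ftp_password"
    }
  ]
}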

Output video or image frame size in pixels ("width"x"height"). Defaults to original frame size.

For HLS or DASH output specify this parameter on stream object level.

Output video frame (or thumbnail) width in pixels. If specified without "height" attribute, frame height is calculated proportionally to original height.

For HLS or DASH output specify this parameter on stream object level.

Output video frame (or thumbnail) height in pixels. If specified without "width" attribute, frame width is calculated proportionally to original width.

For HLS or DASH output specify this parameter on stream object level.

You can specify resolution instead of providing a width or height value for a video output. In this case the system dynamically decides whether this refers to video frame width or height depending on the video orientation.

For horizontal videos resolution value refers to video height.

For vertical videos resolution value refers to video width.

size, width and height params are ignored in case resolution is specified.

Rotate video through specified degrees value. Possible values are 90, 180, 270.

For HLS or DASH output specify this parameter on stream object level.

Examples: "4:3", "16:9", "1.33", "1.77". Defaults to input video aspect ratio.

For HLS or DASH output specify this parameter on stream object level.

Specify 'scale' in case you want to transform frame to fit output size. Specify 'crop' in case you want to preserve input video aspect ratio. In case input and output aspect ratio do not match and 'crop' mode is enabled, output video is cropped or black bars are added, depending on the output dimensions. Possible values: crop, scale. Defaults to 'scale'.

Defaults to original frame rate.

If you don't specify framerate for a stitch job, output framerate will be equal to framerate of the first video in a batch.

For HLS or DASH output specify this parameter on stream object level.

Keyframe interval (in frames). Defaults to 90.

For HLS or DASH output specify this parameter on stream object level.

You can also specify keyframe interval in seconds, just by adding "s" character to the value, e.g. "3s"

Also known as "Constant rate factor" (CRF). Use this parameter to produce optimized videos with variable bitrate. For H.264 the range is 0-51: where 0 is lossless and 51 is worst possible. A lower value is a higher quality and a subjectively sane range is 18-28. Consider 18 to be visually lossless or nearly so: it should look the same or nearly the same as the input but it isn't technically lossless.

For HLS or DASH output specify this parameter on stream object level.

Output video stream bitrate in kbps

Use two pass mode in case you want to achieve exact bitrate values.

For HLS or DASH output specify this parameter on stream object level.

Use two-pass encoding to achieve exact bitrate value for output.

Please note that two-pass encoding is almost twice as slow as one-pass encoding. The price is also doubled.

For HLS or DASH output specify this parameter on stream object level.

Possible values are yuv420p, yuv422p, yuvj420p, yuvj422p. Defaults to yuv420p.

For HLS or DASH output specify this parameter on stream object level.

Defaults to libx264. Possible values are: libx264, libx265, libvpx, libvpx-vp9, lcevc_h264, lcevc_hevc.

lcevc codecs are only supported with encoder_version set to 2.

For HLS or DASH output specify this parameter on stream object level.

x264 video codec settings profile. Possible values are high, main, baseline. Defaults to main.

For HLS or DASH output specify this parameter on stream object level.

Contains video codec parameters for advanced usage.

Attributes

x264 video codec settings profile. Possible values are high, main, baseline. Defaults to main.

Set of constraints that indicate a degree of required decoder performance for a profile. Consists of two digits. Possible values are: 30, 31, 40, 41, 42.

Context-Adaptive Binary Arithmetic Coding (CABAC) is the default entropy encoder used by x264. Possible values are 1 and 0. Defaults to 1.

Possible values are +bpyramid, +wpred, +mixed_refs, +dct8x8, -fastpskip/+fastpskip, +aud. Defaults to None.

One of x264's most useful features is the ability to choose among many combinations of inter and intra partitions. Possible values are +partp8x8, +partp4x4, +partb8x8, +parti8x8, +parti4x4. Defaults to None.

Defines motion detection type: 0 - none, 1 - spatial, 2 - temporal, 3 - auto. Defaults to 1.

Motion Estimation method used in encoding. Possible values are epzs, hex, umh, full. Defaults to None.

Sets sub pel motion estimation quality.

Sets rate-distortion optimal quantization.

Number of reference frames each P-frame can use. The range is from 0-16.

Sets full pel me compare function.

Sets limit motion vectors range (1023 for DivX player).

Sets scene change threshold.

Sets QP factor between P and I frames.

Sets strategy to choose between I/P/B-frames.

Sets video quantizer scale compression (VBR). It is used as a constant in the ratecontrol equation. Recommended range for default rc_eq: 0.0-1.0.

Sets min video quantizer scale (VBR). Must be included between -1 and 69, default value is 2.

Sets max video quantizer scale (VBR). Must be included between -1 and 1024, default value is 31.

Sets max difference between the quantizer scale (VBR).

Sets max bitrate tolerance. Requires 'bufsize' to be set.

For libx264 max_rate is specified in Mbps. For other codecs - in kbps.

Sets min bitrate tolerance (in bits/s). Most useful in setting up a CBR encode. It is of little use otherwise.

For libx264 min_rate is specified in Mbps. For other codecs - in kbps.

Tells the encoder how often to calculate the average bitrate and check to see if it conforms to the average bitrate specified.

For libx264 bufsize is specified in Mbps. For other codecs - in kbps.

Sets the scaler flags. This is also used to set the scaling algorithm. Only a single algorithm should be selected. Default value is 'bicubic'.

Specifies the preset for matching stream(s).

Set generic flags.

Possible values: mv4, qpel, loop, qscale, pass1, pass2, gray, emu_edge, psnr, truncated, ildct, low_delay, global_header, bitexact, aic, cbp, qprd, ilme, cgop.

Sets number of frames to look ahead for frametype and ratecontrol.

Applies to lcevc video codecs enhancements only.

There are six variants of lcevc_tune, according to the aim of the encodes. Depending on the chosen tuning, the encoder will combine optimal settings and parameters according to that goal. The settings are as follows:

Setting: Description
vq: optimizes for visual quality. Default.
vmaf: optimizes for VMAF.
vmaf_neg: optimizes for the new VMAF NEG (No Enhancement Gain).
psnr: optimizes for PSNR.
ssim: optimizes for SSIM, MS-SSIM.
animation: an alternative to 'vq', optimizes for visual quality of animation.

Applies to lcevc video codecs enhancements only.

Specifies the scaling mode for the base encoder picture in the LCEVC hierarchy. In combination with the associated rate control strategies, 2D, 1D and 0D influence the relative allocation of bitrate to the low-, medium- and high-frequency portions of the content.

Mode: Description
2D: two-dimensional 2:1 scaling. E.g. for a 1920x1080 video, the base layer is 960x540. Default for resolutions of 720p and above.
1D: horizontal-only 2:1 scaling. E.g. for a 1920x1080 video, the base layer is 960x1080. This mode is recommendable at high bits per pixel (e.g. full HD above 5 Mbps) or low resolutions (e.g. 540p or below), especially for content with high amounts of relatively low-contrast high-frequency detail. Default for resolutions lower than 720p.
0D: no scaling. Currently this mode can be used exclusively for Native mode (see section 4.2.3). 0D with LCEVC (encoding_mode=enhanced) will be supported in a future release.

Applies to lcevc video codecs enhancements only.

Specifies whether to apply a uniform dithering algorithm.

If None is specified, no dithering is applied. Default for lcevc_tune psnr, vmaf and ssim.

If Uniform is specified, Uniform random dithering is applied. Default for lcevc_tune vq.

Applies to lcevc video codecs enhancements only.

Specifies the maximum dithering strength. Range: 0-10.

  • The default value is 4.
  • A value of 7-8 displays a more visible dither.
  • A value of 2-3 should be used for substantially imperceptible dither.

    Applies to lcevc video codecs enhancements only.

    Specifies the base QP value at which to start applying dither. Range: 0-51. Default: 24.

    Applies to lcevc video codecs enhancements only.

    Specifies the base QP value at which to saturate dither. Range: 0-51. Default: 36.

    Regardless of the base QP value, other low-level parameters adapt dithering strength based on frame luminosity (according to the contrast sensitivity function) as well as the presence of no-contrast plain graphics, which would not benefit from dithering.

    Specifies the M adaptive downsampling mode

    Mode: Description
    disabled: M adaptive downsampling disabled. Default for lcevc_tune=psnr, lcevc_tune=ssim and lcevc_tune=vmaf_neg.
    replace: M adaptive downsampling is applied equally to both residual surfaces. Default for lcevc_tune=vq and lcevc_tune=vmaf.
    separate: M adaptive downsampling is applied separately to residual surfaces. Default for lcevc_tune=animation.

    Applies to lcevc video codecs enhancements only.

    Allows you to increase or decrease the energy of high frequencies, with 0 being a preference for softer details. Default values are modified adaptively by the encoder if you do not specify anything.

    Applies to lcevc video codecs enhancements only.

    Allows you to modify the way in which full resolution details are separated from the mid-to-low frequencies that are passed as low resolution to the base codec. Default values are modified adaptively by the encoder if you do not specify anything.

    For HLS or DASH output specify this parameter on stream object level.

    note
    Note
    Set this value to 'hvc1' for H.265 encodings in order to enable correct playback on Apple devices.

    Enables HDR (high dynamic range) to SDR (standard dynamic range) conversion mode. Possible values: 0 or 1. Defaults to 0.

    HDR to SDR conversion can slow down transcoding significantly so standard price is multiplied by 2.

    Possible values are: aac, libfdk_aac, libvorbis. Defaults to aac.

    For HLS or DASH output specify this parameter on the stream object level.

    Defaults to 64.

    For HLS or DASH output specify this parameter on the stream object level.

    Defaults to 44100.

    For HLS or DASH output specify this parameter on the stream object level.

    Default value is 2.

    For HLS or DASH output specify this parameter on the stream object level.

    If set to 1, replaces audio in the output with a silent track.

    For HLS or DASH output specify this parameter on the stream object level.

    Contains a JSON object with audio mapping. Attribute names are output audio channels and values are audio channels from input.

    Example: {"c0" : "c2", "c1" : "c0"}

    The above specifies stereo output where channel 0 of the output is mapped to channel 2 of the input and channel 1 of the output is mapped to the input channel 0.

    Defaults to 'playlist.m3u8' for HLS and 'playlist.mpd' for DASH outputs. Please specify both filename and extension for custom playlist name.

    Contains a list of elements each describing a single view stream for adaptive streaming format. Use stream objects for HLS or MPEG-DASH outputs.

    The stream object is used with HTTP streaming formats (HLS and MPEG-DASH) and specifies a set of attributes defining stream properties. These are a subset of the attributes that work on the format level for file-based output formats like MP4 or WEBM: size, bitrate, framerate, etc. A few attributes used only with the stream object are listed below.

    Attributes

    Specifies custom file name for HLS or DASH chunk playlist.

    Segment duration to split media (in seconds). Refers to adaptive streaming formats like HLS or DASH. Defaults to 9.

    If set to 1, creates an #EXT-X-I-FRAMES-ONLY playlist for HLS output. Defaults to 0

    If set to 1, creates HLS chunks in fMp4 format instead of TS.

    If set to 1, creates playlist.m3u8 file in the DASH output. Use this to generate CMAF.

    Specifies whether the audio stream is saved to a separate HLS folder or put into the video chunks.

    If set to 1, creates audio as a separate HLS stream. Defaults to 1.

    By default HLS streams are put into sub-folders named video_1, video_2, etc. You can change this behavior by enabling this setting so all chunks and playlists are saved into the same folder. Defaults to 0.
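
    Putting these together, a minimal sketch of an advanced_hls format with two renditions (the stream attribute name follows the stream object terminology used above; the destination and credentials are placeholders):

    {
      "output": "advanced_hls",
      "destination": {
        "url": "s3://us-west.s3.qencode.com/yourbucket/hls/",
        "key": "your_access_key",
        "secret": "your_secret_key"
      },
      "stream": [
        {"size": "1920x1080", "bitrate": 4500, "audio_bitrate": 128},
        {"size": "1280x720", "bitrate": 2500, "audio_bitrate": 128}
      ]
    }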

    Moment in video (% from video duration) to create thumbnail at. Used with output: thumbnail.

    Interval in seconds between thumbnail images. Used with output: thumbnails.

    Specifies image format for 'thumbnail' or 'thumbnails' output. Possible values: png, jpg. Defaults to 'png'.

    Note: use "quality" parameter along with "image_format": "jpg" to specify image quality.

    Stitches thumbnails into a single larger sprite image. Possible values: 0, 1. Defaults to 0.

    Can be used in combination with columns param.

    You should set sprite param to 1 in order to use this setting.
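
    As a hedged sketch of a thumbnails output (the interval attribute name is an assumption inferred from the description above; image_format, quality, sprite and columns are the parameters named above, and the destination is a placeholder folder):

    {
      "output": "thumbnails",
      "destination": {
        "url": "s3://us-west.s3.qencode.com/yourbucket/thumbs/",
        "key": "your_access_key",
        "secret": "your_secret_key"
      },
      "interval": 10,
      "image_format": "jpg",
      "quality": 85,
      "sprite": 1,
      "columns": 5
    }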

    Contains an object specifying the subtitles (closed captions) configuration. Contains sources, an optional array of subtitle objects for a closed captions stream. Each object should have source and language attributes. You can also include the optional parameter copy, specifying whether eia608 or eia708 closed captions should be copied to the output stream. copy is set to 0 by default, which means closed captions won't be copied to the output stream.

    Attributes
    Attributes

    URL to a file with subtitles. Supported formats are: .ass, .srt

    Specifies language for subtitles.
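
    For example, a subtitles object with two caption files and eia608/eia708 copying enabled might look like this (a sketch; URLs are placeholders):

    {
      "copy": 1,
      "sources": [
        {"source": "https://your-server.com/captions_en.srt", "language": "en"},
        {"source": "https://your-server.com/captions_es.srt", "language": "es"}
      ]
    }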

    For streaming formats like HLS or MPEG-DASH specify logo as an attribute of a stream object.

    It is a good idea to have different sized logo images for output streams of different resolutions.

    To create dynamic logos, specify an array of logo objects, each containing a logo for a specific time range of the video.

    Supported image formats for logo are JPEG and PNG.

    Attributes
    This should be a publicly available URL.
    Image X position relative to the left side of the video. Specifying a negative value for the X position positions the image relative to the right side.
    Image Y position relative to the top of the video. Specifying a negative value for the Y position positions the image relative to the bottom of the video.
    Specifies watermark opacity. Possible values are floats in 0..1 range. Defaults to 1.
    If specified, sets the start time for the dynamic logo or watermark.
    If specified, sets the duration for the dynamic logo or watermark.
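
    A hedged sketch of a dynamic logo array follows. The source, x, y and opacity attribute names are assumptions inferred from the descriptions above; start_time and duration are as described, and the image URLs are placeholders:

    "logo": [
      {
        "source": "https://your-server.com/logo_intro.png",
        "x": 20,
        "y": -20,
        "opacity": 0.8,
        "start_time": 0,
        "duration": 30
      },
      {
        "source": "https://your-server.com/logo_main.png",
        "x": -20,
        "y": 20,
        "opacity": 0.5,
        "start_time": 30
      }
    ]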

    Possible values: rgb, bt709, fcc, bt470bg, smpte170m, smpte240m, ycocg, bt2020nc, bt2020_ncl, bt2020c, bt2020_cl, smpte2085.

    Set this to 1 in order to preserve original value.

    For HLS or DASH output specify this parameter on stream object level.

    MPEG vs JPEG YUV range. Possible values: tv, mpeg, pc, jpeg.

    Set this to 1 in order to preserve original value.

    For HLS or DASH output specify this parameter on the stream object level.

    Possible values: bt709, gamma22, gamma28, smpte170m, smpte240m, linear, log, log100, log_sqrt, log316, iec61966_2_4, bt1361, iec61966_2_1, bt2020_10bit, bt2020_12bit, smpte2084, smpte428, arib-std-b67.

    Set this to 1 in order to preserve original value.

    For HLS or DASH output specify this parameter on stream object level.

    Possible values: bt709, bt470m, bt470bg, smpte170m, smpte240m, film, bt2020, smpte428, smpte431, smpte432, jedec-p22.

    Set this to 1 in order to preserve original value.

    For HLS or DASH output specify this parameter on stream object level.

    In Per-Title mode the system runs a special analysis on each source video to find the best encoding params for each scene. This allows you to significantly decrease the output bitrate without sacrificing quality. Currently available for the H.264 and H.265 codecs only.

    Possible values: 0 or 1. Defaults to 0.

    Enabling optimize_bitrate option multiplies price by 1.5x

    Limits the lowest CRF (quality) for Per-Title Encoding mode to the specified value. Possible values: from 0 to 51. Defaults to 0.

    Limits the highest CRF (quality) for Per-Title Encoding mode to the specified value. Possible values: from 0 to 51. Defaults to 51.

    Adjusts best CRF predicted for each scene with the specified value in Per-Title Encoding mode. Should be integer in range -10..10. Defaults to 0.

    The resulting CRF value can only be adjusted within the limits specified with the min_crf and/or max_crf parameters, if they are applied.
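
    For instance, a sketch of a Per-Title H.264 output with quality limits (shown at the format level, which is an assumption; values are illustrative):

    {
      "output": "mp4",
      "video_codec": "libx264",
      "optimize_bitrate": 1,
      "min_crf": 18,
      "max_crf": 28
    }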

    Tag value to pass through encoding system. The value specified for a tag is available as 'user_tag' in job status response.

    For HLS or DASH output specify this parameter on stream object level.

    If specified, enables DRM encryption for Widevine and Playready.

    Attributes

    When taken from a CPIX response, this value should be decoded from base64 and encoded to hex.

    When taken from a CPIX response, you need to remove the dash characters from it.

    Should be specified if present in the DRM provider API response, e.g. in a CPIX response this is the explicitIV attribute in the tag.

    When taken from a CPIX response, this value should be decoded from base64 and encoded to hex.

    If specified, enables DRM encryption for Fairplay.

    Attributes

    Example for EZDRM: skd://fps.ezdrm.com/;<kid>

    If specified, enables AES-128 encryption.

    Attributes

    URL, pointing to 128-bit encryption key in binary format.

    Specifies FFPROBE util version used to get video metadata. Used with 'output' set to 'metadata'. Default value is 4.1.5.

    Used with repack output only. Defaults to 1. If 0 is specified audio stream is removed from the output.

    Used with repack output only. Defaults to 1. If 0 is specified subtitles are removed from the output.

    Used with repack output only. Defaults to 1. If 0 is specified video stream is removed from the output.

    Used with repack output only. Defaults to 1. If 0 is specified metadata is removed from the output.

    Defaults to 0. You should use it on interlaced content only.

    Used with 'speech_to_text' output only. Defaults to 1. If 0 is specified transcript file is not generated.

    Used with 'speech_to_text' output only. Defaults to 'transcript.txt'.

    Used with 'speech_to_text' output only. Defaults to 1. If 0 is specified json file is not generated.

    Used with 'speech_to_text' output only. Defaults to 'timestamps.json'.

    Used with 'speech_to_text' output only. Defaults to 1. If 0 is specified SRT file is not generated.

    Used with 'speech_to_text' output only. Defaults to 'subtitles.srt'.

    Used with 'speech_to_text' output only. Defaults to 1. If 0 is specified VTT file is not generated.

    Used with 'speech_to_text' output only. Defaults to 'subtitles.vtt'.

    Used with 'speech_to_text' output only. Defaults to 'standard'.

    Adjusts the transcription process to balance between speed and accuracy, accepting three values:

    • standard: Provides a balance of speed and accuracy that is suitable for most solutions.
    • accuracy: Provides improved transcription accuracy at the expense of speed.
    • speed: Provides low cost, fast transcription.

    Used with 'speech_to_text' output only.

    Improve accuracy and control over transcription by defining the primary language for speech-to-text processing. Supports over 50 languages using the two-letter ISO 639-1 codes (e.g., 'en' for English, 'es' for Spanish). See the full list of supported languages in our Speech-to-Text Tutorial.

    If not provided, the system automatically identifies the language, though direct specification is recommended for optimal accuracy.
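
    Putting it together, a hedged sketch of a speech_to_text output (the language attribute name is an assumption; the destination is a placeholder folder, and the default transcript and subtitle file names described above are used):

    {
      "output": "speech_to_text",
      "language": "en",
      "destination": {
        "url": "s3://us-west.s3.qencode.com/yourbucket/transcripts/",
        "key": "your_access_key",
        "secret": "your_secret_key"
      }
    }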

    URL of an endpoint on your server to handle task callbacks.

    See Receiving Callbacks.

    Send callback on each subtask event (e.g. rendition queued or completed). Possible values: 0 or 1. Defaults to 0. Recommended value is 0 unless you really need to process each rendition separately.

    See Receiving Callbacks.

    Instructs the encoding system to produce video streams with the exact dimensions specified in the task, even if upscaling is needed to match the specified width and height. Possible values: 0 or 1. Defaults to 0, so if you specify an output height greater than the input, those output formats or streams are ignored. If all output dimensions are greater than the input and upscale mode is not enabled, only the first output format or stream is applied, but with the dimensions of the input video.

    Specifies encoding system version for this task. Possible values are 1 and 2. Defaults to 1.

    Request Example
    curl https://api.qencode.com/v1/start_encode2 \  
     -d task_token=b49e034d198262f1d5d15ed9f3cb8 \  
     -d payload="12345" \  
     -d query='{"query": {  
         "source": "https://your-server.com/video.mp4",  
         "format": [  
           {  
             "output": "mp4",  
             "destination": {  
               "url":"s3://us-west.s3.qencode.com/yourbucket/output.mp4",    
               "key":"abcde12345",    
               "secret":"abcde12345",    
               "permissions": "public-read"  
             },  
         "framerate": "29.97",  
         "keyframe": "25",  
         "size": "360x240",  
         "start_time": 10,  
         "duration": 20,  
         "audio_bitrate": 64 
        } 
       ] 
      } 
     }'
    params = """ 
      {"query": {  
         "source": "https://your-server.com/video.mp4", 
         "format": [  
           { 
            "output": "mp4", 
            "size": "320x240",
            "video_codec": "libx264"  
           } 
         ] 
       } 
     } 
    """ 
    task.custom_start(params)
    $params = '
      {"query": {  
         "source": "https://your-server.com/video.mp4", 
         "format": [  
           { 
            "output": "mp4", 
            "size": "320x240",
            "video_codec": "libx264"  
           } 
         ] 
       } 
     }'; 
    $task->startCustom($params);
    //TODO
    let query = {  
         "source": "https://your-server.com/video.mp4", 
         "format": [  
           { 
            "output": "mp4", 
            "size": "320x240",
            "video_codec": "libx264"  
           } 
         ] 
    }; 
    task.StartCustom(query);
    // Load API query from file.
    var transcodingParams = CustomTranscodingParams.FromFile("query.json"); 
    var started = task.StartCustom(transcodingParams);
    Response Example
    {
     "error": 0,
     "status_url": "https://api.qencode.com/v1/status"
    }

    Getting Status of Tasks

    POST
    /v1/status

    Gets the current status of one or more transcoding jobs.

    The https://api.qencode.com/v1/status endpoint is a quick way to get feedback on whether the job is still running or has already completed.

    The master endpoint https://<master>/v1/status lets you get a more complete set of information about a job. This endpoint URL is returned in the status_url attribute of the job's status object.

    Arguments

    You can use the task tokens returned from the /v1/create_task method to get the current status of several transcoding jobs at the same time.

    Returns

    Dictionary containing a status object for each requested task_token in task_tokens list.

    See status object attributes description below.
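
    Outside of the SDKs, a minimal polling sketch using the Python requests library (the task token is a placeholder) that follows the status_url behavior described above:

    import time

    import requests

    task_token = "your_task_token"  # returned by /v1/create_task
    status_url = "https://api.qencode.com/v1/status"

    while True:
        resp = requests.post(status_url, data={"task_tokens": task_token}).json()
        status = resp["statuses"][task_token]
        # Always switch to the most recent endpoint returned in status_url.
        status_url = status.get("status_url") or status_url
        print(status["status"], status.get("percent"))
        if status["status"] == "completed" or status["error"]:
            break
        time.sleep(5)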

    Output Objects Structure
    Attributes
    Contains all information about task status. Example provided below.
    {
      "status": "encoding",
      "videos": [
        {
          "status": "encoding",
          "profile": null,
          "url": null,
          "percent": 0.0,
          "output_format": null,
          "storage": null,
          "meta": {
            "height": 720,
            "resolution_height": 720,
            "resolution_width": 1280,
            "resolution": 720,
            "width": 1280
          },
          "error_description": null,
          "error": false,
          "duration": "None",
          "tag": "video-0-0",
          "user_tag": null,
          "size": null
        },
        {
          "status": "encoding",
          "profile": null,
          "url": null,
          "percent": 7.1428571428571423,
          "output_format": null,
          "storage": null,
          "meta": {
            "height": 240,
            "resolution_height": 240,
            "resolution_width": 352,
            "resolution": 240,
            "width": 352
          },
          "error_description": null,
          "error": false,
          "duration": "None",
          "tag": "video-0-1",
          "user_tag": null,
          "size": null
        }
      ],
      "percent": 3.5714285714285712,
      "source_size": 69916569.0,
      "audios": [],
      "images": [],
      "error": 0,
      "duration": 596.52099999999996
    }
    Attributes

    See possible status values description below.

    downloading: Video is being downloaded to the Qencode server.
    queued: Task is waiting for available encoders.
    encoding: Video is being transcoded.
    saving: Video is being saved to the destination location.
    completed: The transcoding job has completed successfully and the videos were saved to the destination.

    Endpoint to get the most up-to-date job status.

    You should always get job status using the endpoint specified as the last value returned in status_url.

    Overall completion percent for the job. Currently refers only to 'encoding' status.

    Equals to 0 if there's no error and 1 in case of any error.

    Contains error message.

    List of objects, each containing output video status information.

    See video status object attributes description below.

    Attributes

    Possible values are listed in status attribute description

    Shows percent of completion for subtask in encoding state.

    Contains URL of the output video.

    System-defined tag value.

    User-defined tag value.

    See /v1/start_encode2 method tag parameter description.

    Equals to 0 if there's no error and 1 in case of any error.

    Only present if error = 1.

    List of objects, each containing audio-only output status information.

    See video status object attributes description below.

    Attributes

    Contains URL of the output video.

    System-defined tag value.

    User-defined tag value.

    See /v1/start_encode2 method tag parameter description.

    List of objects, each containing output image status information.

    See image status object attributes description below.

    Attributes

    Contains URL of the output image.

    System-defined tag value.

    User-defined tag value.

    See /v1/start_encode2 method tag parameter description.

    Request Example
    curl https://api.qencode.com/v1/status \  
       -d task_tokens=76682314a86ed377730873394f8172f2
     #Default: 
     status = task.status() 
    
     #Or use callback methods: 
     def my_callback(e):
       print(e)
     def my_callback2(e):
       print(e)
    
     task.progress_changed(my_callback)
     task.task_completed(my_callback2)
    $response = $task->getStatus();
    TranscodingTaskStatus response = task.getStatus();
    let response = task.GetStatus();
    var response = task.GetStatus();
    Response Example
    {
      "error": 0,
      "statuses": {
        "a2600fc63e4511e8ac870202b56d93e3": {
          "status": "completed",
          "status_url": "https://master-79109168ae0711e8b00baa831f35b3f7.qencode.com/v1/status",
          "percent": 100,
          "error": 0,
          "error_description": null,
          "images": [
            {
              "tag": "image-0-0",
              "profile": "5a5db6fa5b8ac",
              "user_tag": "320x240/5",
              "storage": {
                "bucket": "qencode-test",
                "type": "qencode_s3",
                "key": "output/320x240/12345_0.png",
                "format": null
              },
              "url": "s3://us-west.s3.qencode.com/qencode-test/output/320x240/12345_0.png"
            }
          ],
          "videos": [
            {
              "tag": "video-0-0",
              "profile": "5a5db6fa5b8ac",
              "user_tag": "480p",
              "storage": {
                "bucket": "qencode-test",
                "type": "qencode_s3",
                "key": "Outbound/480/12345.mp4",
                "format": "mp4"
              },
              "url": "s3://us-west.s3.qencode.com/qencode-test/output/480/12345.mp4",
              "bitrate": 1692,
              "meta": "{"width": 854, "resolution": 480, "height": 480}",
              "duration": "0.371511"
            }
          ]
        }
      }
    }

    Direct Video Upload

    OPTIONS, POST, PATCH
    /v1/upload_file

    Provides endpoint for direct video file upload using the TUS protocol for resumable uploads.

    Endpoint URL is returned with /v1/create_task method.

    You must add task_token value to the URL when performing upload, so the full URL is: https://<storage_host>/v1/upload_file/<task_token>

    You probably should not implement the TUS protocol from scratch. We have TUS uploads integrated into most of our SDKs; see the examples in the right column. You can also use one of the client implementations from tus.io.

    note
    Note
    You should NOT call this method against the main API server (api.qencode.com). We have special nodes responsible for file uploads, and their URL is returned by the /v1/create_task method.
    Arguments

    Task token returned with /v1/create_task method.

    Should be specified as a part of URL (and NOT a POST parameter).

    Returns

    You can get file_uuid value from Location header returned on Step 2 of upload process (see "Call sequence example" in the right column).

    Call sequence example
    #replace with your API KEY (can be found in your Project settings on Qencode portal)
    API_KEY = 'your-api-key'
    file_path = '/path/to/file/for/upload.mp4'
    
    
    query = """
    {"query": {
       "source": "%s",
       "format": [
         {
           "output": "mp4",
           "size": "320x240",
           "video_codec": "libx264"
         }
       ]
     }
    }
    """
    
    client = qencode.client(API_KEY)
    
    task = client.create_task()
    
    #get upload url from endpoint returned with /v1/create_task and task_token value
    uploadUrl = task.upload_url + '/' + task.task_token
     print('Uploading to: %s' % uploadUrl)
    
    #do upload and get uploaded file URI
    uploadedFile = tus_uploader.upload(file_path=file_path, url=uploadUrl, log_func=log_upload, chunk_size=2000000)
    
    params = query % uploadedFile.url
    task.custom_start(params)

    Receive Callbacks

    Callbacks (also known as webhooks) are asynchronous notifications about job events. You can provide an endpoint on your server to receive task or subtask callbacks. In order to enable task callbacks you should specify a callback URL as a value of callback_url attribute of a query object. See query object description.

    Task callback is fired whenever any of the following events occur:

    • A new video is queued for transcoding. This occurs right after the /v1/start_encode or /v1/start_encode2 method is called.
    • A video transcoding job has completed.
    • An error has occurred during video processing.
    Request Example

    Setting Callback URL in a job request:

    {
      "query": {
        "callback_url": "http://your-server.com/task_callback_endpoint",
        "source": "...",
        "format": [
          ...
        ]
      }
    }

    The list of params sent with callback request is shown below

    Callback request params

    Please note: 'queued' and 'saved' events are sent for all jobs; the 'error' event is only sent in case of an error.

    Payload passed by client application to /v1/start_encode2 method.

    Status object containing all job status attributes. See /v1/status API method reference.

    Please note, the input you receive with a callback request is application/x-www-form-urlencoded. An example of callback request sent to your server is shown below.

    status=%7B%22status%22%3A+%22completed%22%2C+%22videos%22%3A+%5B%7B%22profile%22%3A+null%2C+%22url%22%3A+%22https%3A%2F%2Fstorage.qencode.com%2Fe2079274fc1c4d12af5cf14affc9ba4e%2Fmp4%2F1%2F00f056d030fc11ebb4f46a9cb6debba1.mp4%22%2C+%22bitrate%22%3A+142%2C+%22output_format%22%3A+%22mp4%22%2C+%22storage%22%3A+%7B%22path%22%3A+%22e2079274fc1c4d12af5cf14affc9ba4e%2Fmp4%2F1%2F00f056d030fc11ebb4f46a9cb6debba1.mp4%22%2C+%22host%22%3A+%22storage.qencode.com%22%2C+%22type%22%3A+%22local%22%2C+%22zip%22%3A+%7B%22region%22%3A+%22sfo2%22%2C+%22bucket%22%3A+%22qencode-temp-sfo2%22%2C+%22host%22%3A+%22prod-nyc3-storage-do.qencode.com%22%7D%2C+%22format%22%3A+%22mp4%22%7D%2C+%22tag%22%3A+%22video-0-0%22%2C+%22meta%22%3A+%7B%22resolution_width%22%3A+256%2C+%22resolution_height%22%3A+144%2C+%22framerate%22%3A+%2230000%2F1001%22%2C+%22height%22%3A+144%2C+%22width%22%3A+256%2C+%22codec%22%3A+%22h264%22%2C+%22bitrate%22%3A+%2278946%22%2C+%22dar%22%3A+%2216%3A9%22%2C+%22sar%22%3A+%221%3A1%22%2C+%22resolution%22%3A+144%7D%2C+%22duration%22%3A+%2210.077%22%2C+%22user_tag%22%3A+%22144p%22%2C+%22size%22%3A+%220.179412%22%7D%5D%2C+%22status_url%22%3A+%22https%3A%2F%2Fapi.qencode.com%2Fv1%2Fstatus%22%2C+%22percent%22%3A+100%2C+%22source_size%22%3A+%2216.6585%22%2C+%22audios%22%3A+%5B%7B%22profile%22%3A+null%2C+%22url%22%3A+%22https%3A%2F%2Fstorage.qencode.com%2Fe2079274fc1c4d12af5cf14affc9ba4e%2Fmp3%2F1%2F00be24b230fc11ebbe666a9cb6debba1.mp3%22%2C+%22bitrate%22%3A+122%2C+%22output_format%22%3A+%22mp3%22%2C+%22storage%22%3A+%7B%22path%22%3A+%22e2079274fc1c4d12af5cf14affc9ba4e%2Fmp3%2F1%2F00be24b230fc11ebbe666a9cb6debba1.mp3%22%2C+%22host%22%3A+%22storage.qencode.com%22%2C+%22type%22%3A+%22local%22%2C+%22zip%22%3A+%7B%22region%22%3A+%22sfo2%22%2C+%22bucket%22%3A+%22qencode-temp-sfo2%22%2C+%22host%22%3A+%22prod-nyc3-storage-do.qencode.com%22%7D%2C+%22format%22%3A+%22mp3%22%7D%2C+%22tag%22%3A+%22audio-1-0%22%2C+%22meta%22%3A+%7B%22index%22%3A+1%2C+%22language%22%3A+%22und%22%2C+%22title%22%3A+null%2C+%22program_id%22%3A+null%2C+%22channels%22%3A+6%2C+%22bit_rate%22%3A+320000%2C+%22codec%22%3A+%22ac3%22%2C+%22sample_rate%22%3A+48000%2C+%22program_ids%22%3A+%5B%5D%7D%2C+%22duration%22%3A+%2230.0669%22%2C+%22user_tag%22%3A+%22audio%22%2C+%22size%22%3A+%220.459226%22%7D%5D%2C+%22duration%22%3A+%2230.017%22%2C+%22error_description%22%3A+null%2C+%22error%22%3A+0%2C+%22images%22%3A+%5B%5D%7D&callback_type=task&task_token=e2079274fc1c4d12af5cf14affc9ba4e&event=saved&payload=%7B%22fileName%22%3A+%22bbb_30s.mp4%22%7D

    Here's the urldecoded version:

    status={"status": "completed", "videos": [{"profile": null, "url": "https://storage.qencode.com/e2079274fc1c4d12af5cf14affc9ba4e/mp4/1/00f056d030fc11ebb4f46a9cb6debba1.mp4", "bitrate": 142, "output_format": "mp4", "storage": {"path": "e2079274fc1c4d12af5cf14affc9ba4e/mp4/1/00f056d030fc11ebb4f46a9cb6debba1.mp4", "host": "storage.qencode.com", "type": "local", "zip": {"region": "sfo2", "bucket": "qencode-temp-sfo2", "host": "prod-nyc3-storage-do.qencode.com"}, "format": "mp4"}, "tag": "video-0-0", "meta": {"resolution_width": 256, "resolution_height": 144, "framerate": "30000/1001", "height": 144, "width": 256, "codec": "h264", "bitrate": "78946", "dar": "16:9", "sar": "1:1", "resolution": 144}, "duration": "10.077", "user_tag": "144p", "size": "0.179412"}], "status_url": "https://api.qencode.com/v1/status", "percent": 100, "source_size": "16.6585", "audios": [{"profile": null, "url": "https://storage.qencode.com/e2079274fc1c4d12af5cf14affc9ba4e/mp3/1/00be24b230fc11ebbe666a9cb6debba1.mp3", "bitrate": 122, "output_format": "mp3", "storage": {"path": "e2079274fc1c4d12af5cf14affc9ba4e/mp3/1/00be24b230fc11ebbe666a9cb6debba1.mp3", "host": "storage.qencode.com", "type": "local", "zip": {"region": "sfo2", "bucket": "qencode-temp-sfo2", "host": "prod-nyc3-storage-do.qencode.com"}, "format": "mp3"}, "tag": "audio-1-0", "meta": {"index": 1, "language": "und", "title": null, "program_id": null, "channels": 6, "bit_rate": 320000, "codec": "ac3", "sample_rate": 48000, "program_ids": []}, "duration": "30.0669", "user_tag": "audio", "size": "0.459226"}], "duration": "30.017", "error_description": null, "error": 0, "images": []}&callback_type=task&task_token=e2079274fc1c4d12af5cf14affc9ba4e&event=saved&payload={"fileName": "bbb_30s.mp4"}

    And a more readable version with each param on its own line:

    status={"status": "completed", "videos": [{"profile": null, "url": "https://storage.qencode.com/e2079274fc1c4d12af5cf14affc9ba4e/mp4/1/00f056d030fc11ebb4f46a9cb6debba1.mp4", "bitrate": 142, "output_format": "mp4", "storage": {"path": "e2079274fc1c4d12af5cf14affc9ba4e/mp4/1/00f056d030fc11ebb4f46a9cb6debba1.mp4", "host": "storage.qencode.com", "type": "local", "zip": {"region": "sfo2", "bucket": "qencode-temp-sfo2", "host": "prod-nyc3-storage-do.qencode.com"}, "format": "mp4"}, "tag": "video-0-0", "meta": {"resolution_width": 256, "resolution_height": 144, "framerate": "30000/1001", "height": 144, "width": 256, "codec": "h264", "bitrate": "78946", "dar": "16:9", "sar": "1:1", "resolution": 144}, "duration": "10.077", "user_tag": "144p", "size": "0.179412"}], "status_url": "https://api.qencode.com/v1/status", "percent": 100, "source_size": "16.6585", "audios": [{"profile": null, "url": "https://storage.qencode.com/e2079274fc1c4d12af5cf14affc9ba4e/mp3/1/00be24b230fc11ebbe666a9cb6debba1.mp3", "bitrate": 122, "output_format": "mp3", "storage": {"path": "e2079274fc1c4d12af5cf14affc9ba4e/mp3/1/00be24b230fc11ebbe666a9cb6debba1.mp3", "host": "storage.qencode.com", "type": "local", "zip": {"region": "sfo2", "bucket": "qencode-temp-sfo2", "host": "prod-nyc3-storage-do.qencode.com"}, "format": "mp3"}, "tag": "audio-1-0", "meta": {"index": 1, "language": "und", "title": null, "program_id": null, "channels": 6, "bit_rate": 320000, "codec": "ac3", "sample_rate": 48000, "program_ids": []}, "duration": "30.0669", "user_tag": "audio", "size": "0.459226"}], "duration": "30.017", "error_description": null, "error": 0, "images": []}
    &callback_type=task
    &task_token=e2079274fc1c4d12af5cf14affc9ba4e
    &event=saved
    &payload={"fileName": "bbb_30s.mp4"}

    Supported Output Formats

    mp4: Creates an MP4 output. Available codecs: libx264, libx265.
    webm: Creates a WEBM output. Available codecs: libvpx, libvpx-vp9.
    advanced_hls: Creates an HLS (TS-based or fMp4-based) output. Available codecs: libx264, libx265. You can create an fMp4-based HLS version by setting the fmp4 attribute to 1.
    advanced_dash: Creates an MPEG-DASH (fMp4-based) output. Available codecs: libx264, libx265.
    webm_dash: Creates an MPEG-DASH (webm-based) output. Available codecs: libvpx, libvpx-vp9.
    mp3: Creates an MP3 (audio-only) output.
    hls_audio: Creates an HLS (audio-only) output.
    flac: Creates a FLAC (audio-only) output.
    gif: Creates a GIF output. You can control fps with the framerate attribute.
    thumbnail: Creates a single image output. Supported image types are PNG and JPG (see the image_format attribute). You can control JPEG quality with the quality attribute.
    thumbnails: Creates multiple images at the specified interval plus a .vtt file. Supported image types are PNG and JPG (see the image_format attribute). You can control JPEG quality with the quality attribute.
    metadata: Creates a JSON file with video metadata.
    speech_to_text: Automatically generates a transcript and subtitles from the source media audio track, and stores them together in a folder.

    Error Codes

    • If the job is SUCCESSFUL, 'error' param will be set to 0 in the response.
    • If the job has FAILED, response contains 'error' param set to a value from the Error Code table below.
    ERROR CODE (VALUE): DESCRIPTION
    ERROR_OK (0): Operation completed successfully.
    ERROR_SERVER_INTERNAL (1): Internal server error occurred, please contact support@qencode.com.
    ERROR_BAD_API_KEY (2): Your API key did not pass validation.
    ERROR_API_KEY_NOT_FOUND (3): We can't find such an API key in our database.
    ERROR_BAD_TOKEN (4): Token did not pass validation.
    ERROR_TOKEN_NOT_FOUND (5): We can't find such a token in our database.
    ERROR_SERVICE_SUSPENDED (6): Service is suspended. Please log into your account and clear any billing issues associated with your account. If you have any questions, contact support@qencode.com.
    ERROR_MASTER_NOT_FOUND (7): Internal server error occurred, please contact support@qencode.com.
    ERROR_SYSTEM_BUSY (8): We don't have enough resources to process the request at the moment. You should retry it in the number of seconds specified in the response.
    ERROR_BAD_PAYLOAD (9): Payload value is too long.
    ERROR_PROJECT_NOT_FOUND (10): Server issue. Internal server error occurred, please contact support@qencode.com.
    ERROR_BAD_PROFILE (11): Profile field value does not pass validation.
    ERROR_PROFILE_NOT_FOUND (12): We can't find the specified profile in our database.
    ERROR_BAD_TOKENS (13): task_tokens field value does not pass validation.
    ERROR_FIELD_REQUIRED (14): A value for the field specified in the response is required in the request.