CognitiveCast™

Product and Solution Overview

CognitiveCast™ is a Cognitive Mill product that offers a smart solution for engaging your viewers with celebrity-related content.

The system analyzes the video content and identifies main and secondary characters without human involvement, filtering out unimportant incidental individuals and extras.

You get an automatically generated list of important characters classified as main (cast_movie) and secondary (cast_accidental_free) characters, including only those that are essential for further analysis and navigation.

By integrating CognitiveCast™ with any celebrity database via the API, you can easily provide additional information about the names of the cast members, as well as find other video content featuring these celebrities on your platform to increase audience interest and retention.

Our system finds all the scenes in the video timeline where the main and secondary characters appear and organizes this information into accurate and comprehensible clusters.

After running the parent Cast meta process, you get a meta.json file that contains all the required metadata for further integrations via the API and processing with third-party software.

You can also preview representative frames with main and secondary characters and create a highlight reel with any selected character using our media generation tool called MediaMill™.

Benefits:

  • You get a JSON file with clustered metadata about main and secondary characters for further integrations via the API and processing with third-party software.
  • CognitiveCast™ automatically processes any type of content without additional configurations. You can analyze sports events, movies, series, TV shows, or news.
  • You can easily integrate our cloud ‘robot’ with your systems and software via the API.
  • You can create a highlight reel including only the scenes where the selected character appears with the help of our media generation tool MediaMill™.
  • You can interact with the platform either via the UI or via the API.

Let’s review what is included in the output meta.json file.

Output Metadata

A meta.json file is the core file that contains metadata of the video for further integration and processing with third-party systems and software or with MediaMill™ for generating a reel with the selected characters and scenes.

The Cast meta.json file that you get after running the Cast meta process contains all the metadata required for further API integrations and processing.

The Cast meta.json file can nominally be divided into the following three parts:

1. Cast. Distinguishing properties of each unique cast ID of the asset.

2. Cast Listings. Lists of cast IDs grouped by importance: main and secondary characters.

3. Segments. Elements with segment type shot display the segment’s start and end properties, cast IDs of the main and secondary characters appearing in the shot, and the time marker of the representative frame with them.
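For orientation, here is a minimal Python sketch of these three parts, using an abbreviated sample dictionary modeled on the examples shown later in this document (a real meta.json would be parsed with Python's json module):

```python
# Abbreviated sample mirroring the meta.json structure described above;
# real values come from parsing the downloaded file with json.load().
meta = {
    "cast": [
        {"id": 43, "ms": 65360, "cluster_length": 1800},  # sample values
    ],
    "cast_movie": [177, 43, 13],           # IDs of main characters
    "cast_accidental_free": [1152, 1, 2],  # IDs of secondary characters
    "segments": [
        {"type": "shot", "start": {"ms": 61880}, "end": {"ms": 66160},
         "repr_ms": 65360, "cast_movie": [43, 13], "cast_accidental_free": []},
    ],
}

main_ids = meta["cast_movie"]
secondary_ids = meta["cast_accidental_free"]
print(main_ids, secondary_ids)  # [177, 43, 13] [1152, 1, 2]
```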

Let’s review each part in more detail.

Cast

Cast displays the following properties of each main and secondary character:

  • Box. The coordinates of the box marking the cast member's face in the representative frame. The first two values are the X and Y coordinates of the top-left corner, and the last two values are the X and Y coordinates of the bottom-right corner of the box. These values are given as fractions of the video frame’s width and height, respectively, and range from 0 to 1.
    We currently provide one box per character for the whole video, but we can select and mark more frames per request.
  • Cluster_length. An additional characteristic describing the character’s relative presence in the video timeline, based on the number of their quality faceprints within frames.
  • Descriptor. A vector of 512 unique values that describe the landmarks of a human face. For example, by comparing the descriptors of characters with those of the celebrities in a database, you can find and identify celebrities in the cast.
    Contact us for more information about how to compare descriptors.
  • ID. The unique identifier of each character.
  • Ms. The time marker of the representative frame in the video timeline.
  • Additional parameters. Weight and weight_vector (including cluster_length_relative and cluster_mean_area_relative) are parameters that help the system differentiate main characters from secondary characters. These parameters are for internal use.
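As an illustration of how the Box and Descriptor values could be used, here is a hedged Python sketch: the pixel conversion follows directly from the fractional coordinates described above, while the distance function is only one plausible way to compare descriptors (the exact matching procedure is not documented here; contact us as noted above).

```python
import math

def box_to_pixels(box, frame_width, frame_height):
    """Convert a normalized [x1, y1, x2, y2] box (fractions of the
    frame size, 0 to 1) into pixel coordinates."""
    x1, y1, x2, y2 = box
    return (round(x1 * frame_width), round(y1 * frame_height),
            round(x2 * frame_width), round(y2 * frame_height))

def descriptor_distance(a, b):
    """Euclidean distance between two face descriptors; a smaller
    distance suggests the faces are more likely the same person.
    (Illustrative only: the actual comparison method may differ.)"""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# The box from the Cast example below, on a hypothetical 1920x1080 frame.
print(box_to_pixels([0.5746, 0.2828, 0.6929, 0.5941], 1920, 1080))
```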

Below is an example of the Cast properties for one secondary character.

{
  "cast": [
    {
      "box": [
       0.5746442675590515,
       0.28279876708984375,
       0.6929214596748352,
       0.594132125377655
      ],
      "cluster_length": 1051,
      "descriptor": [
        -0.3757123649120331,
        -1.528092384338379,
        -0.4923481047153473,
        -2.3364672660827637,
        1.4594130516052246,
        -0.5204954147338867,
        1.172236442565918,
        0.7864460945129395,
        1.648589849472046,
        [...]
        0.27942773699760437,
        0.4237338900566101,
        -0.3347760736942291,
        0.9878032803535461
      ],
      "id": 1,
      "ms": 689440,
      "weight": 0.03386,
      "weight_vector": {
        "cluster_length_relative": 0.029,
        "cluster_mean_area_relative": 0.0533
      }
    },

Cast Listings

IDs of all cast members, main and secondary characters, are listed under the following two keys:

  • Cast_movie. A filtered list of IDs that includes only main characters: leads in movies and series, newscasters and TV show hosts (for interviews, cast_movie may also include guests), key athletes in a sports event, etc.
  • Cast_accidental_free. A list of all secondary characters, excluding extras: supporting characters in movies and series, guests of TV shows, news reporters, and athletes who didn't appear as often as the key ones but still played an important role in the event.

Below is an example of this part of the meta.json file.

"cast_accidental_free": [
  1152,
  1,
  2,
  897,
 [...]
  1012,
  122,
  380,
  253
],
"cast_movie": [
  177,
  43,
  13
],
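These listings contain only IDs; to get each character's full properties, you can join them back to the cast array. A small Python sketch, using sample values based on the excerpts above:

```python
# Sample data shortened from the examples in this document.
cast = [
    {"id": 43, "ms": 65360},
    {"id": 13, "ms": 12000},
    {"id": 1,  "ms": 689440},
]
cast_movie = [43, 13]
cast_accidental_free = [1]

# Index cast entries by ID, then resolve each listing to full entries.
by_id = {entry["id"]: entry for entry in cast}
main_characters = [by_id[i] for i in cast_movie]
secondary_characters = [by_id[i] for i in cast_accidental_free]

print([c["id"] for c in main_characters])  # [43, 13]
```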

Segments

Each segment contains the following metadata:

  • Type. The segment type. For CognitiveCast™, the segment type is shot.
  • Start. The start time of the segment in milliseconds (ms), marking where the shot begins in the video timeline.
  • End. The end time of the segment in milliseconds (ms), marking where the shot ends in the video timeline.
  • Repr_ms. The time marker of the representative frame within the current shot, i.e., the frame that best describes the shot.
  • Cast_movie. The list of IDs of the main characters that appear in the current shot.
  • Cast_accidental_free. The list of IDs of the secondary characters that appear in the current shot.

Below is an example of the Segments properties for one shot.

{
  "segments": [
    {
      "cast_accidental_free": [],
      "cast_movie": [
        43,
        13
      ],
      "end": {
        "ms": 66160
      },
      "repr_ms": 65360,
      "start": {
        "ms": 61880
      },
      "type": "shot"
    }
  ]
}
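Building on the segment structure above, a character's appearances can be collected as (start, end) pairs, e.g. as input for a highlight reel. A minimal Python sketch, where the segments list is a sample in the documented shape:

```python
def scenes_with_character(segments, character_id):
    """Return (start_ms, end_ms) pairs for every shot in which the
    given character ID appears, whether as main or secondary cast."""
    return [
        (seg["start"]["ms"], seg["end"]["ms"])
        for seg in segments
        if seg["type"] == "shot"
        and (character_id in seg["cast_movie"]
             or character_id in seg["cast_accidental_free"])
    ]

# Sample segments in the shape shown above.
segments = [
    {"type": "shot", "start": {"ms": 61880}, "end": {"ms": 66160},
     "repr_ms": 65360, "cast_movie": [43, 13], "cast_accidental_free": []},
    {"type": "shot", "start": {"ms": 70000}, "end": {"ms": 72000},
     "repr_ms": 71000, "cast_movie": [], "cast_accidental_free": [1]},
]
print(scenes_with_character(segments, 43))  # [(61880, 66160)]
```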

Demo Case

This guide shows you how to generate a Cast meta JSON file for further integration and processing via the UI at run.cognitivemill.com.

Before processing your video:

1. Sign in to your account or register if it’s your first visit.

2. Make sure you have the Cast meta quota to get your video processed.

New users are provided with trial quotas. If you don’t have the required quotas, contact us at support@aihunters.com to get them.

When you are all set, follow the instructions:

1. Click Run Process on the top navigation bar.

The Run a New Process page opens.

2. Select Cast meta from the drop-down list of the Process type field.
Note that you can select only those processes for which you have quotas.

3. In the Title field, enter a name for your process.

4. In the Video source field, either paste a link to the video you want to process — the default Use video link option — or click the field > select Use video file > click Select file > pick a file from your device to upload.

5. (Optionally) Clear the checkbox to cancel the creation of a transcoded proxy file. The checkbox is selected by default.
A transcoded proxy file is lightweight, so it is easier and faster for the visualizer to open.

6. Click the Run process button.
The video processing starts. You can follow the progress on the Process List page, where your process appears with its current status.

When the status changes to Completed, you can:

  • Download metadata for further processing with third-party systems and software.
  • Preview the scenes with the specified character(s) in the Cognitive Mill visualizer and generate a trailer with the selected character(s) using MediaMill™.

To download metadata:

1. On the Process List page, click the three vertical dots icon next to your process.

2. In the pop-up menu that appears, click Get meta.json.

The meta.json file has been downloaded to your device.

To preview the representative frame with the specified character:

1. On the Process List page, click the title of the process to open the Cognitive Mill visualizer.

2. In the Show Representative Frame section of the side navigation bar, select the character's ID from the drop-down list to view the representative frame.

The representative frame for the selected character is displayed.

Additionally, you can preview the scenes with the specified character.

1. In Timelines, select the character's ID.

The character's timeline appears under the video.

2. Click the Play button above the timeline.

To generate a highlight reel with the selected character:

1. In Timelines, select the character's ID.

2. Click Add to Editor above the timeline.

3. Click Run Media Mill.
The Media Mill page opens.

4. Enter a title for your process.

5. (Optionally) Select the checkbox to create a lightweight transcoded proxy file, which is easier and faster for the visualizer to open.

6. Click Run Process.

When the status of the process changes to Completed, you can preview and download the output video by clicking the three vertical dots icon.

7. In the pop-up menu that appears, click Get out media.

The video is opened in a separate tab for preview.

8. Click the three vertical dots icon in the bottom-right corner of your screen.

9. Click Download in the menu that appears. The video has been downloaded to your device.

Current Challenges

  • Audience as cast.
    Sometimes audiences at a TV show or sports event, or extras in a movie, can be mistakenly classified as cast (cast_accidental_free) if the same individuals are frequently shown in close-up. This may happen when the total screen time of such random people is comparable to that of TV show guests or secondary characters.
  • Portraits, photos, and statues.
    Portraits, photos, and statues of humans can be challenging for the robot when they are emphasized, shown several times in the video, and remain in focus for more than five seconds. Because they display a human face, the robot’s eyes catch them, and the long time in focus may lead the system to classify them as secondary characters.
  • Characters wearing glasses and face masks.
    When a character is wearing a face mask in a number of scenes and then appears without it, the system may identify them as two different characters. The same thing may happen when a character is wearing glasses.

We’re already improving our robot’s eyes and decision algorithms so that future versions of CognitiveCast™ won’t face these challenges.

