You have a platform where people can upload images or videos, but sometimes those uploads might contain explicit content that could be triggering for some users. This is where Cloudinary and Amazon Rekognition come to save the day!
Cloudinary is an image and video management service that lets you store, transform, and deliver your media assets securely. It also offers a powerful AI moderation add-on, powered by Amazon Rekognition, that can automatically detect explicit content in images and videos.
Here’s how it works:
1. Sign up for a Cloudinary account (if you don’t have one already).
2. Grab your cloud name, API key, and API secret from the console dashboard, and upload your media assets to the platform.
3. Enable the Amazon Rekognition AI Moderation add-on from the Add-ons page of your Cloudinary console.
4. Decide on a confidence threshold for explicit-content detection based on how strict you want moderation to be.
5. Configure the notification or action to trigger when explicit content is detected (e.g., an email alert or flagging the uploader’s account); a webhook sketch follows this list.
6. Test and monitor the system to ensure it’s working as expected.
7. Enjoy a safer platform for your users!
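Step 5 is easiest to wire up with Cloudinary’s upload notifications: pass a notification_url with the upload (or configure one globally) and Cloudinary will POST the moderation result to your server once the add-on finishes. Below is a minimal Express sketch of such a receiver; the /cloudinary/moderation route, the hideAsset/notifyModerators helpers, and the exact payload field names (moderation_status, public_id) are assumptions to verify against Cloudinary’s notification documentation.
// Minimal webhook receiver for moderation notifications (a sketch, not a full implementation)
const express = require('express');
const app = express();
app.use(express.json()); // parse Cloudinary's JSON notification body

app.post('/cloudinary/moderation', (req, res) => {
  // Assumed payload fields; confirm the exact names in Cloudinary's notification docs
  const { moderation_status, public_id } = req.body;
  if (moderation_status === 'rejected') {
    // Hypothetical helpers: hide the asset and alert your moderation queue
    // hideAsset(public_id);
    // notifyModerators(public_id);
    console.log(`Asset ${public_id} was rejected by moderation`);
  }
  res.sendStatus(200); // acknowledge receipt of the notification
});

app.listen(3000);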
With that configured, let’s jump into some code examples using the Cloudinary Node.js SDK:
1. Upload an image to Cloudinary and request moderation:
// Import the Cloudinary Node.js SDK (v2 API) and assign it to the variable 'cloudinary'
const cloudinary = require('cloudinary').v2;
// Configure the Cloudinary SDK with your cloud name, API key, and API secret
cloudinary.config({
  cloud_name: 'your-cloud-name',
  api_key: 'your-api-key',
  api_secret: 'your-api-secret'
});
// Upload an image and ask the Amazon Rekognition AI Moderation add-on to screen it.
// The add-on must already be enabled for your account (step 3 above), and 'await'
// only works inside an async function (or a module with top-level await).
const result = await cloudinary.uploader.upload('path/to/image', {
  // Optional tag so you can find user uploads later
  tags: ['user-upload'],
  // Request moderation by the Amazon Rekognition AI Moderation add-on
  moderation: 'aws_rek'
});
// The upload response reports the verdict in its 'moderation' array,
// e.g. [{ kind: 'aws_rek', status: 'approved' }], with status 'rejected' for explicit content
console.log(result.moderation);
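Videos can be screened the same way, though video moderation runs asynchronously. To the best of my recollection the moderation kind for video is 'aws_rek_video'; treat that string (and the pending-then-notification flow) as an assumption to confirm against the Amazon Rekognition Video Moderation add-on docs.
// A sketch of the video variant, assuming the video moderation kind is 'aws_rek_video'
const videoResult = await cloudinary.uploader.upload('path/to/video', {
  resource_type: 'video', // tell Cloudinary this is a video upload
  moderation: 'aws_rek_video' // assumed kind for asynchronous video moderation
});
// Because video moderation is asynchronous, the initial status is typically 'pending';
// the final verdict arrives later (for example via the webhook sketched above).
console.log(videoResult.moderation);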
2. Check whether an uploaded asset was flagged as explicit by the Amazon Rekognition AI Moderation add-on:
// Check the verdict for the asset we just uploaded: Rekognition's decision comes back
// in the 'moderation' array of the upload response from example 1.
const verdict = (result.moderation || []).find((m) => m.kind === 'aws_rek');

if (verdict && verdict.status === 'rejected') {
  // Explicit content was detected: trigger a notification or action based on your preferences.
} else {
  // No explicit content was detected (or moderation is still pending): proceed with normal processing.
}

// You can also list every asset the add-on has rejected via the Admin API,
// which is handy for periodically reviewing flagged content.
const flagged = await cloudinary.api.resources_by_moderation('aws_rek', 'rejected', {
  max_results: 10 // return up to 10 flagged assets per call
});
flagged.resources.forEach((asset) => {
  console.log('Flagged asset:', asset.public_id);
});
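Automated moderation will occasionally reject harmless content, so it helps to let a human reviewer overturn a verdict. Here’s a minimal sketch that assumes the Admin API’s update method accepts a moderation_status override (confirm the option name for your SDK version):
// Manually override the add-on's verdict for a single asset after human review
async function approveAfterReview(publicId) {
  // 'approved' clears the rejection; 'rejected' upholds it
  return cloudinary.api.update(publicId, {
    moderation_status: 'approved'
  });
}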
That’s it! With Cloudinary and the Amazon Rekognition AI Moderation add-on, you can protect your users from unwanted explicit content while still providing a seamless media management experience.