Analyze images for drugs, suggestive or explicit material with this custom operation, powered by Clarifai.
Speaker 0: The Directus AI Image Moderation operation helps you build safer and more positive applications, powered by Clarifai. It takes in an image and tells you whether there are any drugs, suggestive or explicit material, or gore in the image, and then you can take action based on what is right for your application. So we're gonna build that together today. We are going to create a new flow that will trigger whenever a new image is uploaded. We'll call this one image moderation.
Lovely, and we will run this whenever a file is uploaded. Now, this will run on every file that's uploaded, regardless of type. In the real world, you will want to add a condition that checks that this is indeed an image, or perhaps scope it to a certain folder, maybe just end-user upload folders, stuff like that. The documentation for this operation shows you how to set that up, but for now, I'm going to assume every file is an image and every file wants this moderation check. So we have this file upload trigger, and we are gonna go ahead and use our AI Image Moderation operation.
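If you do want that condition, the rule you'd paste into a Condition operation looks roughly like the sketch below. The key names ($trigger.payload.type and folder) and the folder ID are assumptions for illustration; the operation's documentation shows the exact setup.

```ts
// Mirrors the JSON you'd paste into a Condition operation's rule field.
// Assumed keys: the trigger payload exposing the file's MIME type and folder.
const onlyModerateImages = {
  $trigger: {
    payload: {
      type: { _contains: "image" },              // only run for image/* files
      folder: { _eq: "YOUR-UPLOADS-FOLDER-ID" }, // hypothetical end-user uploads folder
    },
  },
};
```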
There it is, the AI Image Moderation operation. Now we want to provide our Clarifai API token and provide a full file URL. That's my project URL (a tunnel, in this demo), then /assets/, then {{$trigger.key}}, which is the ID of the file whose upload started this flow, and of course you'll want to replace the base with your actual Directus project URL. There's also a threshold here, which will change which items are flagged.
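Put another way, the value you type into the URL field is just your project URL plus the assets path plus the trigger key. A minimal sketch, with a placeholder base URL:

```ts
// Assembling the full file URL passed to the operation.
// The base URL is a placeholder; in the flow UI you'd type the template
// tag {{$trigger.key}} rather than interpolating it yourself.
const projectUrl = "https://your-project.example.com"; // your Directus project URL
const fileUrl = `${projectUrl}/assets/` + "{{$trigger.key}}";
// => https://your-project.example.com/assets/<id-of-the-uploaded-file>
```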
You will always get back a raw response, between 0 and 100, giving the confidence that drugs, suggestive or explicit material, or gore is present, but there's an additional section called flags, which makes it easier to take action. This threshold changes what is needed for something to appear as a flag. In fact, I just want to show you how this works, because what happens next is very specific to your application. So, I've got an image here of a needle with some white powder. In our Files module here, we'll just go and upload that.
So you see, that's the image there; there are definitely some drugs in that image, right? We're gonna go over to our flow and look at the log, and we see here that, in the payload, there's a very, very high chance that drugs were present. It's not suggestive, it's not gore, it's not explicit, and because that number is above our threshold, this appears as a flag.
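To make that concrete, the payload looks roughly like the shape below. The field names and numbers are illustrative, not taken from the operation's actual output, so check your own flow's log for the exact keys.

```ts
// Illustrative shape of the moderation result (assumed field names).
interface ModerationResult {
  drugs: number;      // confidence (0-100) that drugs are present
  suggestive: number; // confidence for suggestive material
  explicit: number;   // confidence for explicit material
  gore: number;       // confidence for gore
  flags: string[];    // categories whose score crossed the threshold
}

// Made-up numbers matching the log's qualitative result for the needle-and-powder image:
const example: ModerationResult = {
  drugs: 99,
  suggestive: 1,
  explicit: 0,
  gore: 0,
  flags: ["drugs"],
};
```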
Now, in most episodes in this series, I build a complete project, but this one is really up to you. Instead of an action, which is a non-blocking flow, you may choose a blocking flow: add a condition onto the end of the moderation, and if something is flagged, stop the file being uploaded at all. Or you may flag it a certain way: change a status field on the file and stop it displaying to end users, or display a warning. You may choose to email moderators to come in and do a manual check. You may wanna send an email to the user, and so on and so forth. So, hopefully, you can start seeing the applications of this image moderation operation.
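For example, if you went the blocking route, a Run Script operation after the moderation step might look something like this sketch. The data-chain key ("moderation") is a placeholder for whatever you named that operation, and whether a thrown error cancels the upload depends on the flow being blocking, so verify the details against the flows documentation.

```ts
// Sketch of a Run Script operation in a blocking flow (assumed key names).
module.exports = async function (data: Record<string, any>) {
  // "moderation" stands in for the key of the AI Image Moderation step.
  const result = data.moderation;

  if (result && result.flags && result.flags.length > 0) {
    // In a blocking flow, failing here is intended to stop the upload.
    throw new Error(`Upload rejected: flagged for ${result.flags.join(", ")}`);
  }

  return { checked: true };
};
```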
I'm gonna stop here in this video because it gets very implementation-specific after this. But I hope you find loads of value in it. It's really, really powerful and, hopefully you'll agree, quite nice to set up. So I'll see you in the next video. Bye for now.