Autonomous Mixer
Ingredients 🙂 : 3D printer, motors, touchscreen with Raspberry Pi, Leap Motion (for gesture recognition), pressure sensor (to know when the bowl is on the turntable), Kinect (for facial response feedback and mixing monitoring). So, this would be the coolest mixer everrr (lol)! Would 3D print the housing and the mixing blades. Would self-clean (maybe) and automatically mix… Continue reading Autonomous Mixer
Autonomous Smart Faucets
So, I want to make a device that will turn the bathroom sink on and off for me… Basically, I'm too lazy to turn off the sink while I'm brushing my teeth. So, I want to connect a Raspberry Pi to a Kinect/Leap Motion and to a couple of motors that can move the knobs. Then it would be… Continue reading Autonomous Smart Faucets
TurboJpeg to get yuyv images
This uses the TurboJPEG library to decode a compressed JPEG to YUYV. TurboJPEG already converts the JPEG image to YUYV, which is YUV 4:2:2, which is totally awesome since that is exactly what we want!! You can get libjpeg-turbo, which includes TurboJPEG, here and then do: sudo dpkg -i libjpeg-turbo-official_1.4.1_i386.deb. To compile your stuff check… Continue reading TurboJpeg to get yuyv images
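For reference, here is a minimal sketch of that decode using the TurboJPEG C API (the 1.4-era tjDecompressToYUV2 call). Strictly speaking it hands back planar YUV (e.g. 4:2:2 planes), so packing into interleaved YUYV would still be a small extra step; the buffer handling here is my own, not the code from the post.

```cpp
// Minimal sketch: decompress an in-memory JPEG to YUV planes with TurboJPEG.
#include <turbojpeg.h>
#include <cstdio>
#include <vector>

std::vector<unsigned char> jpegToYuv(const unsigned char* jpegBuf, unsigned long jpegSize) {
    tjhandle tj = tjInitDecompress();
    int w = 0, h = 0, subsamp = 0;
    // Read width/height/subsampling (e.g. TJSAMP_422) from the JPEG header.
    tjDecompressHeader2(tj, const_cast<unsigned char*>(jpegBuf), jpegSize, &w, &h, &subsamp);

    std::vector<unsigned char> yuv(tjBufSizeYUV2(w, /*pad=*/1, h, subsamp));
    if (tjDecompressToYUV2(tj, jpegBuf, jpegSize, yuv.data(), w, /*pad=*/1, h, 0) != 0)
        std::fprintf(stderr, "tjDecompressToYUV2: %s\n", tjGetErrorStr());

    tjDestroy(tj);
    return yuv;  // Y plane first, then U, then V (exact layout depends on subsamp)
}
```

Link with -lturbojpeg when compiling.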
My Attempt
The main issue is that of white on white, which I can't seem to get past other than with extreme changes to the color, as described in an earlier post. Probably, if that turns out to be a thing, it would just be applied if I couldn't see any posts and only if I believed that… Continue reading My Attempt
Breaking Camouflage
I've found two methods for breaking camouflage (from this paper): 1. Multiple Camouflage Breaking by Co-Occurrence and Canny 2. Convexity-based Visual Camouflage Breaking. So, I'm starting to hope we really don't need this… CloudSight doesn't even work on the images that the Canny paper (1) can deal with. So, CloudSight must not be too cutting… Continue reading Breaking Camouflage
Structural Analysis
The next step would be to look into the structural analysis module in OpenCV and convexity defects… Never mind… that is something you do after you already have the points, and I still need the points. It might be useful for identifying hands or fingers, but I don't need it for goal posts.
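For what it's worth, here's a rough sketch of what that OpenCV call chain looks like once you do have the points: findContours gives you the points, then convexHull (as indices) feeds convexityDefects. The Otsu threshold and the image path are placeholders of mine, not anything from the post.

```cpp
// Sketch: contour -> convex hull (as indices) -> convexity defects.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    cv::Mat gray = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    cv::Mat bin;
    cv::threshold(gray, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return 1;

    // Use the largest contour -- these are "the points" you need first.
    size_t biggest = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[biggest])) biggest = i;

    std::vector<int> hull;            // hull stored as indices into the contour
    cv::convexHull(contours[biggest], hull, false, false);
    std::vector<cv::Vec4i> defects;   // {start idx, end idx, farthest idx, depth*256}
    cv::convexityDefects(contours[biggest], hull, defects);

    for (const auto& d : defects)
        std::cout << "defect depth: " << d[3] / 256.0 << " px\n";
}
```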
Competitors' Goal code
The NUbots goal detection code uses something called RANSAC. I wonder if they have tried their algorithm on a white background yet… It wouldn't seem so, since they are still using a LUT, which I don't think will work with white on white. Of course, this might be their old code from last year.
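I haven't seen the NUbots implementation, but the general idea of RANSAC line fitting is simple enough that a toy version fits in a few lines. This is just the textbook algorithm run on made-up points, not their code.

```cpp
// Toy RANSAC line fit on 2D points: repeatedly pick two random points, fit a line,
// count inliers within a tolerance, and keep the line with the most inliers.
#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <cmath>
#include <iostream>
#include <vector>

struct Line { double a, b, c; };  // ax + by + c = 0, with a^2 + b^2 = 1

static Line lineThrough(const cv::Point2f& p, const cv::Point2f& q) {
    double a = q.y - p.y, b = p.x - q.x;
    double n = std::hypot(a, b);
    return { a / n, b / n, -(a / n) * p.x - (b / n) * p.y };
}

Line ransacLine(const std::vector<cv::Point2f>& pts, int iters = 200, double tol = 2.0) {
    Line best{0, 0, 0};
    int bestInliers = -1;
    for (int i = 0; i < iters; ++i) {
        const cv::Point2f& p = pts[std::rand() % pts.size()];
        const cv::Point2f& q = pts[std::rand() % pts.size()];
        if (p == q) continue;
        Line l = lineThrough(p, q);
        int inliers = 0;
        for (const auto& x : pts)
            if (std::abs(l.a * x.x + l.b * x.y + l.c) < tol) ++inliers;
        if (inliers > bestInliers) { bestInliers = inliers; best = l; }
    }
    return best;
}

int main() {
    // Fake data: points roughly along a vertical "post" edge at x = 100, plus noise.
    std::vector<cv::Point2f> pts;
    for (int y = 0; y < 100; ++y) pts.emplace_back(100.0f + (std::rand() % 3 - 1), (float)y);
    for (int i = 0; i < 30; ++i) pts.emplace_back((float)(std::rand() % 200), (float)(std::rand() % 200));
    Line l = ransacLine(pts);
    std::cout << "line: " << l.a << "x + " << l.b << "y + " << l.c << " = 0\n";
}
```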
Breaking Countershading
So, I just had to play around with the colors of the picture that Canny couldn't handle (and still can't), and now there is a distinct green color for the white “goal post”! That could then be ID'd by the blob detection we have now. However, how to do this programmatically is beyond me!… Continue reading Breaking Countershading
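One guess at doing that kind of extreme color change programmatically is to boost saturation hard in HSV, so the faint tint difference between the white post and the white background becomes an obvious colour that blob detection can latch onto. This is purely a sketch of that guess; I don't know what edit the image editor actually applied or whether this reproduces the green.

```cpp
// Sketch: multiply the saturation channel so near-white pixels show their faint tint.
#include <opencv2/opencv.hpp>
#include <vector>

int main(int argc, char** argv) {
    cv::Mat bgr = cv::imread(argv[1], cv::IMREAD_COLOR);
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);
    ch[1].convertTo(ch[1], -1, 8.0, 0);   // scale saturation by 8, clipping at 255
    cv::merge(ch, hsv);

    cv::Mat boosted;
    cv::cvtColor(hsv, boosted, cv::COLOR_HSV2BGR);
    cv::imwrite("boosted.png", boosted);
}
```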
Goal net
So, I grabbed some images from RoboCup 2014 that we took of the goal with the netting. One has no people behind it and the other has people behind it. I ran both Canny and Hough on them to see whether the netting would stand out. Canny with people behind the net: Canny with no… Continue reading Goal net
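The Canny pass was roughly of this shape; the blur and the hysteresis thresholds below are my own guesses, not the values actually used on these images.

```cpp
// Sketch: smooth, then run Canny edge detection and save the edge map.
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {
    cv::Mat img = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
    cv::Mat blurred, edges;
    cv::GaussianBlur(img, blurred, cv::Size(5, 5), 1.5);  // suppress noise before edges
    cv::Canny(blurred, edges, 50, 150);                   // low/high hysteresis thresholds
    cv::imwrite("edges.png", edges);
}
```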
White Goal Post Experiment
As you probably can't see, I have placed a white PVC pipe in an orange vice grip and put it in front of a white background. I tried running the image through OpenCV's HoughLinesP (image below) and Canny, and neither can extract the pipe. However, I'm not surprised, as when I zoom… Continue reading White Goal Post Experiment
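For completeness, the HoughLinesP step looks roughly like this, run over an edge map such as the Canny output from the goal-net post above. The parameters are guesses; with white on white there may simply be no gradient for either step to find.

```cpp
// Sketch: probabilistic Hough transform over an edge map, drawing any segments found.
#include <opencv2/opencv.hpp>
#include <vector>

int main(int argc, char** argv) {
    cv::Mat edges = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);  // a binary edge image

    std::vector<cv::Vec4i> lines;
    // rho = 1 px, theta = 1 degree, 50 votes, min length 40 px, max gap 10 px -- all guesses
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 40, 10);

    cv::Mat vis;
    cv::cvtColor(edges, vis, cv::COLOR_GRAY2BGR);
    for (const auto& l : lines)
        cv::line(vis, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]), cv::Scalar(0, 0, 255), 2);
    cv::imwrite("hough.png", vis);
}
```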