Lidar would need to gather data from the bot and stream it back to the laptop, so it would need something other than the standard radio link, most probably WiFi or Bluetooth. If that is allowed then it might be a viable project for me.
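For what it's worth, here's a rough, untested sketch of what the robot-side streaming could look like. It assumes a Linux board on the bot, a simple UDP link over WiFi, and a made-up read_scan() standing in for whatever driver the actual lidar unit uses; the IP address, port and packet layout are placeholders.

```cpp
// Minimal sketch (untested): robot-side UDP sender pushing lidar scan data
// to a laptop over WiFi. The lidar read itself is stubbed out; swap in
// whatever driver your unit actually uses.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the real lidar driver: one full revolution,
// 360 range readings in millimetres.
std::vector<uint16_t> read_scan() { return std::vector<uint16_t>(360, 0); }

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in laptop{};
    laptop.sin_family = AF_INET;
    laptop.sin_port = htons(5005);                        // placeholder port
    inet_pton(AF_INET, "192.168.4.2", &laptop.sin_addr);  // placeholder laptop IP

    while (true) {
        std::vector<uint16_t> scan = read_scan();
        // One UDP datagram per revolution keeps framing simple; if a packet
        // is lost over WiFi you just drop that revolution and carry on.
        sendto(sock, scan.data(), scan.size() * sizeof(uint16_t), 0,
               reinterpret_cast<sockaddr*>(&laptop), sizeof(laptop));
        usleep(100000);  // roughly 10 scans per second
    }
    close(sock);
}
```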
Depending on the level of processing power required, you could do lidar with a Raspberry Pi (one of the new ones), or stepping up, an Intel Compute Stick, or I suppose if you need an actual computer in the robot then one of the Intel NUCs. I say this because I'd be worried about the reliability of a WiFi signal. Not sure why, but despite our TXs and WiFi both being 2.4GHz, the former seems far more reliable.
Cramming that kind of tech into a FW would be tricky. But I'd love to see it happen!
A cheap smartphone would probably be capable of doing all that, in theory at least: decent processing power, WiFi, low power consumption, very compact with its own power source, and lightweight with reasonable shock resistance. I wouldn't know where to start programming all that, though.
There was also a robot on Robot Wars Extreme that had automated side axes, but have you checked the new Robot Wars rules?
For computation's sake I would need something fairly fast. The Odroid-C2 looks pretty good: with a Mali-450 GPU I could use OpenCL to speed up a lot of calculations, and a GPU is very good for processing big data sets like a lidar point cloud. Its CPU is a 2GHz quad-core too, which is an improvement over the Raspberry Pi 3. I'll no doubt be able to find suitable hardware if I did go ahead with this; just wondering if it's worth the fuss.
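To make the offload idea concrete, here is a rough, untested sketch of the sort of thing that would get pushed to the GPU: converting a revolution of polar lidar ranges to Cartesian points. The kernel is deliberately trivial; the point is the host-side shape of buffer, kernel, readback. It assumes the board's driver actually exposes OpenCL and that the headers are installed.

```cpp
// Untested sketch: offload polar-to-Cartesian conversion of a lidar scan
// to the GPU with OpenCL. Error checking is omitted for brevity.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSrc = R"(
__kernel void polar_to_xy(__global const float* range,
                          __global float2* out,
                          const float step_rad) {
    int i = get_global_id(0);
    float a = i * step_rad;
    out[i] = (float2)(range[i] * cos(a), range[i] * sin(a));
}
)";

int main() {
    const size_t n = 360;
    std::vector<float> ranges(n, 1000.0f);  // fake scan, millimetres
    std::vector<float> xy(2 * n);

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "polar_to_xy", nullptr);

    cl_mem in  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                n * sizeof(float), ranges.data(), nullptr);
    cl_mem out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                2 * n * sizeof(float), nullptr, nullptr);

    float step = 2.0f * 3.14159265f / n;
    clSetKernelArg(k, 0, sizeof(cl_mem), &in);
    clSetKernelArg(k, 1, sizeof(cl_mem), &out);
    clSetKernelArg(k, 2, sizeof(float), &step);

    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, out, CL_TRUE, 0, 2 * n * sizeof(float), xy.data(),
                        0, nullptr, nullptr);

    printf("point 0: (%.1f, %.1f)\n", xy[0], xy[1]);
    return 0;
}
```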
I was considering this for a UK ant so I'll put down my thoughts.
My first thought was that, being an ant, there was no way I could get sufficient processing and sensors on it to fight effectively, so I started going down the offboard control route. I settled on a Kinect 2 camera. Offboard gives a much more stable data stream than a robot being thrown around the arena, and makes it much easier to process things like the robot's position in the arena. The 3D imaging capability of the Kinect 2 simplifies the image processing needed, as you can reasonably assume a non-flat bit of floor is probably a robot, hazard or wall (there's a rough sketch of that idea at the end of this post). This gets around bots that might otherwise be very hard for the system to visualise. It would also have allowed me to throw a much bigger chunk of processing power at the problem by using a desktop as opposed to a single-board PC.
The eventual plan was to minimise the information the robot needed to feed back, ideally to the point where none is needed at all. Effectively this would allow the control system to drive any bot, not just one built for autonomy, provided it had some kind of tactic for each.
The only issue is that Kinect 2 cameras can have trouble seeing through polycarbonate, so there might have been the possibility of having to place the camera inside the arena. That's not much of a risk in ants, but more so in the higher weight classes. On the plus side, you can rig it to capture some nice battle footage too!
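Here's the promised sketch (untested) of the "anything that isn't flat floor is probably a robot, hazard or wall" idea. It assumes an overhead Kinect 2 and a depth frame delivered as a 512x424 array of millimetre values (the Kinect 2 depth resolution); actually grabbing the frames on the desktop would be done with the Kinect SDK or libfreenect2.

```cpp
// Untested sketch: flag depth pixels that sit above the empty arena floor.
#include <cstdint>
#include <vector>

constexpr int W = 512, H = 424;

// reference: depth of the empty arena floor, captured once before the fight.
// current:   live depth frame.
// Returns a mask where 1 = something sticking up off the floor.
std::vector<uint8_t> objectMask(const std::vector<uint16_t>& reference,
                                const std::vector<uint16_t>& current,
                                uint16_t threshold_mm = 25) {
    std::vector<uint8_t> mask(W * H, 0);
    for (int i = 0; i < W * H; ++i) {
        // 0 means "no reading" on the Kinect, so skip those pixels.
        if (reference[i] == 0 || current[i] == 0) continue;
        // With the camera looking down, anything closer to the camera than
        // the floor by more than the threshold is an object of some kind.
        if (reference[i] > current[i] &&
            reference[i] - current[i] > threshold_mm) {
            mask[i] = 1;
        }
    }
    return mask;
}
```

From that mask you'd then pull out blobs and their centroids to get positions for your own bot and the opponent; that part is left out here.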
I am keeping an eye on this thread regarding building autonomous robots.
It would be so cool to program a game plan for fighting the other robot, with a different plan programmed for each type of bot.
I can only work with Arduino though, so it would probably just be ultrasonic and infrared sensing (rough sketch at the end of this post).
But I love the idea of external processing with the Kinect camera. Unfortunately I couldn't figure something like that out unless I had ten years, lol.
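Here's the Arduino-style sketch mentioned above (untested): read an HC-SR04 ultrasonic sensor and turn the range into a very crude game plan, spin to search, drive at anything close. The pin numbers and the driveForward()/spinInPlace() motor helpers are placeholders for whatever the actual drivetrain uses.

```cpp
// Untested Arduino sketch: HC-SR04 range reading driving a simple tactic.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

void driveForward() { /* motor code goes here */ }
void spinInPlace()  { /* motor code goes here */ }

long readDistanceCm() {
  // Standard HC-SR04 trigger pulse, then time the echo.
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long duration = pulseIn(ECHO_PIN, HIGH, 30000);  // timeout ~30 ms
  if (duration == 0) return -1;                    // nothing in range
  return duration / 58;                            // microseconds -> cm
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  long d = readDistanceCm();
  if (d > 0 && d < 60) {
    driveForward();   // something close ahead: charge it
  } else {
    spinInPlace();    // nothing seen: keep scanning
  }
  delay(50);
}
```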