What is it?
People have always been interested in creating machines that do work more efficiently, but for centuries these machines were single-purpose devices. During WWII this began to change. Alan Turing was tasked with deciphering messages encrypted by the German "Enigma" machines, and the results of his work were profound. His code-breaking machines, together with his earlier theoretical work on universal computation, helped lay the groundwork for the modern computer: a machine distinguished from its predecessors by its ability to follow any series of steps given to it.
Over the years, programmers tired of punching holes into cards to tell their computers what to do, so the industry moved toward text-based code. Some of the first languages were procedural, meaning an author told the machine what to do step by step. More recently, object-oriented languages have become popular, letting programmers design their code more intuitively. For example, the code that controls an LED may be sectioned off from the code that controls a timer, rather than having it all in one place.
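The LED-and-timer idea above can be sketched in a few lines of Python. The class and method names here are invented for illustration; the point is only that each piece of hardware gets its own self-contained object.

```python
class Timer:
    """Tracks elapsed time in discrete ticks; knows nothing about LEDs."""
    def __init__(self):
        self.ticks = 0

    def tick(self):
        self.ticks += 1

    def has_elapsed(self, duration):
        return self.ticks >= duration


class LED:
    """Owns only the LED state; knows nothing about timing."""
    def __init__(self):
        self.on = False

    def toggle(self):
        self.on = not self.on


# The two objects cooperate without sharing internals:
# blink the LED every 5 ticks.
led = LED()
timer = Timer()
for _ in range(10):
    timer.tick()
    if timer.has_elapsed(5):
        led.toggle()
        timer = Timer()  # start timing the next blink
```

Because the timer logic lives entirely in `Timer`, it can later be reused to time an autonomous routine without touching any LED code.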
While languages were evolving, computer scientists made advances in how to actually use them. One of the most profound technologies is deep learning, which has sparked innovation in areas ranging from speech recognition to autonomous driving. At its core, a network is described by two kinds of information: numeric weights that say how each node transforms the data it receives, and a structure that says how the nodes are connected. The nodes are organized into groups, called layers, and data passes through the layers sequentially. For example, the input layer may be given how many bedrooms, bathrooms, and kitchens a house has, and the output layer may predict the cost of the house. As the number of layers increases, these "neural networks" can handle more complex data.
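The house-price example above can be made concrete with a tiny forward pass in numpy. Every number here is made up for illustration; a real network would learn its weights from data rather than having them written by hand.

```python
import numpy as np

def relu(x):
    """A common node activation: pass positives through, zero out negatives."""
    return np.maximum(0.0, x)

# Input layer: [bedrooms, bathrooms, kitchens]
x = np.array([3.0, 2.0, 1.0])

# One weight matrix per connection between layers ("how nodes impact data");
# the matrix shapes encode how the layers are wired together.
W1 = np.array([[ 0.5, 0.2, 0.1],   # hidden layer: 4 nodes, each a
               [ 0.3, 0.8, 0.0],   # weighted blend of the 3 inputs
               [-0.2, 0.4, 0.6],
               [ 0.7, 0.1, 0.3]])
b1 = np.array([0.1, 0.0, 0.2, -0.1])

W2 = np.array([[0.6, 0.4, 0.2, 0.5]])  # output layer: 1 node (price)
b2 = np.array([0.05])

# Data passes through the layers sequentially.
hidden = relu(W1 @ x + b1)
price = (W2 @ hidden + b2)[0]   # predicted price, in $100k units
```

Adding more hidden layers is just more matrix multiplications in the chain, which is why deeper networks can model more complex relationships.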
How do we do it?
Every FRC team places a RoboRIO on its robot. This small controller executes whatever code was deployed to it from a Windows PC. In 2017, the Cyborg Cats began aggregating reusable Java code into a library, including code for controlling swerve drive trains with field-oriented Xbox controls. In addition, our lead programmer has written Java code to interact with a Leap Motion Controller and provide an extra set of controls for the robot.
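The core math behind field-oriented control is a single rotation: the driver's joystick command arrives in field coordinates, and it gets rotated by the gyro heading into the robot's own frame. This is a Python sketch of that idea under simple assumptions (angles measured counterclockwise, function names invented); the team's actual library implements it in Java.

```python
import math

def field_to_robot(field_x, field_y, heading_deg):
    """Rotate a field-relative (x, y) drive command into the robot frame."""
    theta = math.radians(heading_deg)
    robot_x = field_x * math.cos(theta) + field_y * math.sin(theta)
    robot_y = -field_x * math.sin(theta) + field_y * math.cos(theta)
    return robot_x, robot_y

# Driver pushes straight downfield while the robot happens to face 90 degrees:
# in its own frame, the robot must translate sideways to travel downfield.
rx, ry = field_to_robot(0.0, 1.0, 90.0)
```

The benefit for the driver is that "push the stick away from me" always means "drive away from me," no matter which way the robot is currently facing.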
Though many teams use encoders for closed-loop control, 4256 will be using a ZED Depth Sensing Camera to get visual odometry feedback. Since the RoboRIO is not powerful enough to run the ZED SDK, we have a Jetson TK1 on our robot that interacts with the rest of our systems through NetworkTables.
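A minimal sketch of how visual-odometry feedback works: the camera reports incremental motion each frame, and the robot integrates those deltas into a running pose estimate. The real pipeline (the ZED SDK on the Jetson, with values published over NetworkTables) is far more involved; the deltas below are invented for illustration.

```python
import math

def integrate(pose, dx, dy, dtheta):
    """Apply a robot-frame motion delta to a field-frame pose (x, y, theta)."""
    x, y, theta = pose
    x += dx * math.cos(theta) - dy * math.sin(theta)
    y += dx * math.sin(theta) + dy * math.cos(theta)
    return (x, y, theta + dtheta)

pose = (0.0, 0.0, 0.0)
# Two camera frames: drive forward 1 m while turning 90 degrees,
# then forward 1 m again in the new direction.
for delta in [(1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]:
    pose = integrate(pose, *delta)
```

Unlike wheel encoders, this estimate does not drift when the wheels slip, which is exactly why a camera-based source of odometry is attractive for closed-loop control.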
We do all of our vision processing in Python using the OpenCV and numpy libraries. These widely adopted tools give our programmers experience they can take with them into industry. Our head programmer is also experimenting with Caffe2, TensorFlow, and Keras to add machine learning to the team's portfolio.
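The heart of most FRC vision code is a two-step pattern: threshold an image to isolate a bright target, then compute where the target is. Real code would run OpenCV routines such as `cv2.inRange` and `cv2.findContours` on camera frames; here plain numpy on a tiny synthetic grayscale image keeps the idea self-contained, and the image contents are invented for illustration.

```python
import numpy as np

# An 8x8 grayscale frame with a bright 3x3 "target" in the upper right.
image = np.zeros((8, 8), dtype=np.uint8)
image[2:5, 5:8] = 255

mask = image > 128                # binary threshold: keep bright pixels
ys, xs = np.nonzero(mask)         # coordinates of every target pixel
center = (xs.mean(), ys.mean())   # target centroid in (col, row) order
```

The centroid's horizontal offset from the image center becomes the steering error fed back to the drive code, closing the loop between camera and wheels.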
One ambitious member of 4256 has created an iOS app which can be used to synchronize scouting data during competition.