
Preparing to test the application on a mock chicken in the lab.
Author: Jackson Bumgarner | Major: Computer Science | Semester: Spring 2025
I am Jackson Bumgarner, a Computer Science major. Throughout the spring semester of 2025, I have been working in Dr. Ngan Le’s AI Computer Vision lab alongside Dr. Khoa Vo. Under their mentorship I have created an application that takes live color-depth (RGBD) video of poultry pens, runs the footage through an image processing pipeline, and returns a weight estimate for every chicken visible in the video. The application relies on multiple computer vision AI models to process image data from the camera, models for which I also had to gather data and train. This application would allow poultry farms to greatly streamline the process of weighing a flock, as current methods are time-consuming and labor-intensive.
In previous semesters I worked to gather visual data for training the AI models. This involved Dr. Vo and me traveling to the Poultry Science Feed Mill over the course of several months to record RGBD videos of broiler chickens in their pens. Using these videos, we were able to train two AI models: one built from the YOLOv8 segmentation model, which is used to find and isolate all broiler chickens visible in a given frame; and another utilizing a Kernel Point Convolution (KPConv) operation to estimate the weight of a given chicken. The KPConv implementation I worked with was created by Dr. Vo and takes as input 3D point clouds built from the RGBD video. Training the two models took several months, but we got them to the point where KPConv could adequately predict a chicken’s weight and YOLO could accurately locate and track multiple chickens between frames.
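For the segmentation side, the fine-tuning followed the standard workflow for YOLOv8, roughly like the minimal sketch below, assuming the Ultralytics Python package is used; the checkpoint, dataset YAML, and video file names here are placeholders rather than our exact files.

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 segmentation checkpoint and fine-tune it on
# annotated broiler-chicken frames (the dataset YAML path is a placeholder).
model = YOLO("yolov8m-seg.pt")
model.train(data="broiler_seg.yaml", epochs=100, imgsz=640)

# At inference time, track() keeps a persistent ID for each chicken across frames.
results = model.track("feed_mill_pen.mp4", persist=True)
```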
The processing pipeline itself is split into four steps (a simplified code sketch of one pass through the pipeline follows this list):
- While the RGBD camera is recording, a frame is taken from the live feed and split into depth and color images.
- The color image is given to the YOLO model, which then segments the image and locates all of the chickens in the frame. Each chicken is also given an ID that is tracked between frames.
- Next, the depth and color images are used to make 3-dimensional point clouds. The depth information supplies the third dimension for each point, while the segmentation output from YOLO is used to remove the background and other birds from the depth and color images. This ensures that the point cloud generated for a chicken only contains information from the area that chicken occupies. A separate point cloud is generated for every chicken YOLO detects in the frame.
- The point clouds are given to the KPConv model as input, and the model outputs an estimated weight in kilograms for each one. The estimated weight is then drawn on the color image and displayed to the user.
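To make these steps concrete, here is a minimal sketch of one pass through the pipeline, assuming the Ultralytics YOLOv8 package for segmentation and tracking and Open3D for the point-cloud step. The weights file, the camera intrinsics, and the kpconv_model.predict() call are placeholders standing in for our actual configuration and Dr. Vo’s KPConv implementation, not the lab’s exact code.

```python
import numpy as np
import cv2
import open3d as o3d
from ultralytics import YOLO

# Fine-tuned segmentation weights and camera intrinsics are placeholders.
seg_model = YOLO("broiler_yolov8_seg.pt")
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

def process_frame(color_bgr, depth_m, kpconv_model):
    """Estimate and draw a weight for every chicken visible in one RGBD frame."""
    # Step 2: segment the color image and track chicken IDs across frames.
    result = seg_model.track(color_bgr, persist=True, verbose=False)[0]
    if result.masks is None or result.boxes.id is None:
        return color_bgr  # no chickens detected in this frame

    annotated = color_bgr.copy()
    h, w = depth_m.shape
    for mask_t, box, track_id in zip(result.masks.data, result.boxes.xyxy, result.boxes.id):
        # Step 3: keep only the pixels belonging to this chicken.
        mask = cv2.resize(mask_t.cpu().numpy(), (w, h)) > 0.5
        masked_color = np.where(mask[..., None], color_bgr, 0).astype(np.uint8)
        masked_depth = np.where(mask, depth_m, 0).astype(np.float32)

        # Build a per-chicken point cloud from the masked color + depth pair.
        rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
            o3d.geometry.Image(cv2.cvtColor(masked_color, cv2.COLOR_BGR2RGB)),
            o3d.geometry.Image(masked_depth),
            depth_scale=1.0, convert_rgb_to_intensity=False)
        cloud = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

        # Step 4: KPConv estimates a weight (kg) from the point cloud (hypothetical API).
        weight_kg = kpconv_model.predict(np.asarray(cloud.points))

        # Draw the tracked ID and estimated weight next to the chicken.
        x1, y1 = int(box[0]), int(box[1])
        cv2.putText(annotated, f"#{int(track_id)}: {weight_kg:.2f} kg",
                    (x1, max(y1 - 10, 20)), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return annotated
```

In the real application this kind of function runs inside the capture loop, once per frame pulled from the live RGBD feed (step 1), so the displayed IDs and weights update continuously while the camera records.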
I have spent the last two semesters working on the application, managing to get it fully functional in late January. We spent the two months after that testing the pipeline with various videos from our training dataset and returning to the Feed Mill to do a live test. The results from these experiments led us to make improvements to the code, such as enhancing performance when a large number of chickens are in the frame and improving the readability of the user interface. Alongside Dr. Vo, I also began work on a paper discussing the project, and I plan on staying with the AI Computer Vision lab for a few months after I graduate to help prepare it for submission to the Smart Agricultural Technology journal.
The OUR Grant gave me the ability to delve into a rapidly evolving research field and create a useful tool for poultry farmers. It has also given me valuable experience in building large software applications, which will make me better prepared for my future endeavors in software development. My current plan is to go into industry once my work with Dr. Le and Dr. Vo has finished, then save up money for a few years so I can pursue my master’s degree.