Austin McEver
Welcome to my page :) I have just completed my PhD with the Vision Research Lab at the University of California, Santa Barbara after 5 years of computer vision research. My dissertation focuses on enhancing object detection of invertebrate species in a challenging, partially labeled dataset of underwater video that I have made public. I am also broadly interested in other machine learning (ML) problems, and I enjoyed enhancing Facebook's video recommender system during my summer internship in 2021.
Interests: weak supervision, partial supervision, computer vision, recommender systems, machine learning, deep learning
Contact: mcever ⓐt ucsb.edu
Education
PhD in Computer Science at University of California, Santa Barbara. Class of 2022
MS in Computer Science at University of California, Santa Barbara. Class of 2021
BS in Computer Science at University of Tennessee, Knoxville. Class of 2017
Professional Experience
November 2022 - January 2023 Meta Platforms Inc, Research Scientist - ML Generalist
Hired as team member to improve ML modeling, pipelines, web services, and software systems
Spent three days as a full-time employee before Meta laid off 13% of its workforce in November 2022
June 2021 - September 2021 Facebook, Software Engineer Intern - Machine Learning
Enhanced Facebook's video recommendation models in a cloud computing environment, improving performance metrics (e.g. watch time) by 1% in A/B testing experiments, which resulted in a full-time job offer
Committed >1,000 lines of code to improve Watch recommendations, increasing performance on low-traffic videos through data-driven optimization techniques and software development
Fall 2018 - Present: Vision Research Lab, Graduate Student Researcher
Summer 2018: Mayachitra Inc, Computer Vision Research Intern
Collaborated with senior researchers to participate in the Defense Innovation Unit Experimental (DIUx) xView detection challenge in overhead satellite imagery
Experimented with adapting the YOLOv3 convolutional neural network to DIUx's public xView dataset
Summer 2017: OSIsoft LLC, Research Intern
Connected OSIsoft's PI System data infrastructure to Esri ArcGIS to enable real-time visualization of CURENT's power grid simulation
Compared the OSIsoft PI System and Esri ArcGIS visualization with custom Python visualization software developed during my time with CURENT
January 2016 - May 2017: CURENT UTK, Undergraduate Researcher
Created custom visualizations for power system simulations using wxPython
Traveled to Southeast University in Nanjing, China to assist in developing a genetic algorithm that resolves stability issues in microgrid power system simulations
Publications
R. McEver. Detection and Segmentation Using Less Supervision. Doctoral Dissertation.
R. McEver, B. Zhang, C. Levenson, A S M Iftekhar, B.S. Manjunath, "Context-Driven Detection of Invertebrate Species in Deep-Sea Video", CVPRW 2022; IJCV 2023.
R. McEver, B. Zhang, and B.S. Manjunath. Context-Matched Cut-and-Paste Collage Generation for Object Detection. In review with IEEE Transactions on Multimedia.
S. Haque and R. McEver. “Box Prediction Rebalancing for Training Single-Stage Object Detectors with Partially Labeled Data”. NeurIPS 2022 Workshop.
A S M Iftekhar, S. Kumar, R. McEver, S. You, and B.S. Manjunath. GTNet: Guided Transformer Network for Detecting Human-Object Interactions. To be presented as an oral at SPIE PRT 2023.
R. McEver and B.S. Manjunath, "PCAMs: Weakly Supervised Semantic Segmentation Using Point Supervision", arXiv preprint, 2020.
Research Projects
Context-Driven Detection of Invertebrate Species in Deep-Sea Video
Marine Applied Research and Exploration (MARE) has collected hundreds of hours of video using their unmanned, underwater, remotely operated vehicles (ROVs). In order to better survey and understand life in California's coastal waters, MARE has annotated each video with species and substrate labels.
My implementation of a Context-Driven Detector enables detection, tracking, and counting of invertebrate species with partial labels, while simultaneously generating temporal labels for the substrates present in MARE's videos, as demonstrated on the public Dataset for Underwater Invertebrate Analysis (DUSIA).
Context-Matched Collage Generation for DUSIA
Collecting images for training computer vision models is difficult, and collecting annotations suitable for training object detectors is costly; on DUSIA in particular, annotations are partial and limited. To ease the challenges of training with less supervision, Context-Matched Collages leverage explicit context labels to combine unused background examples with existing annotated data, synthesizing additional training samples that improve object detection performance. Combining a set of our generated collage images with the original training set improves performance for three different object detectors on DUSIA, ultimately achieving state-of-the-art object detection performance on the dataset.
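As a rough illustration of the cut-and-paste idea, here is a minimal Python sketch in which annotated object crops are pasted onto a background frame only when their context labels match. The random placement, data structures, and blending-free paste are simplifying assumptions for illustration, not the published method's exact procedure.

```python
# Minimal sketch of context-matched cut-and-paste collage generation.
# The pairing rule, box format, and paste logic are simplified assumptions.
import random
import numpy as np

def make_collage(background: np.ndarray,
                 bg_context: str,
                 crops: list[tuple[np.ndarray, str, str]]):
    """Paste object crops whose context label matches the background.

    crops: list of (patch, class_name, context_label) tuples, where patch
    is an HxWx3 array cut from an annotated frame.
    Returns the collage image and its synthesized box annotations.
    """
    collage = background.copy()
    boxes = []  # (x1, y1, x2, y2, class_name) labels for the new image
    H, W = collage.shape[:2]
    for patch, cls, ctx in crops:
        if ctx != bg_context:        # context matching: skip mismatched crops
            continue
        h, w = patch.shape[:2]
        if h >= H or w >= W:         # crop too large for this background
            continue
        # Random placement on the context-matched background.
        y = random.randint(0, H - h)
        x = random.randint(0, W - w)
        collage[y:y + h, x:x + w] = patch
        boxes.append((x, y, x + w, y + h, cls))
    return collage, boxes
```

The collages produced this way can simply be appended to the original training set before training the detector.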
Habitat Recognition in Underwater Vehicle Video
As a first step toward creating the Context-Driven Detector, I implemented a method using a convolutional neural network (CNN) capable of generating temporal labels for DUSIA. A ResNet-based classification model classifies the video frame by frame, and a median filter temporally smooths the classification results, as sketched below.
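A minimal sketch of this pipeline, assuming a ResNet-50 backbone, a hypothetical number of substrate classes, and an arbitrary median-filter window (the actual configuration differs):

```python
# Sketch of frame-wise habitat classification with temporal median smoothing.
# NUM_SUBSTRATES and KERNEL are illustrative assumptions.
import cv2
import torch
import numpy as np
from scipy.signal import medfilt
from torchvision import models, transforms

NUM_SUBSTRATES = 5   # hypothetical number of substrate classes
KERNEL = 31          # hypothetical median-filter window (odd, in frames)

model = models.resnet50(weights=None)  # in practice, load trained weights
model.fc = torch.nn.Linear(model.fc.in_features, NUM_SUBSTRATES)
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224), antialias=True),
])

def classify_video(path: str) -> np.ndarray:
    """Return one smoothed substrate label per frame of the video."""
    cap = cv2.VideoCapture(path)
    labels = []
    with torch.no_grad():
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            logits = model(preprocess(rgb).unsqueeze(0))
            labels.append(int(logits.argmax(dim=1)))
    cap.release()
    # Median filtering removes single-frame misclassifications,
    # since substrates change slowly relative to the frame rate.
    return medfilt(np.asarray(labels), kernel_size=KERNEL)
```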
Semantic Segmentation of Underwater Images of Sessile Organisms
The Marine Science Institute at UCSB regularly photographs the sea floor near the Channel Islands and annotates its data using point labels, which indicate the species present in a small group of pixels. I adapted a weakly supervised segmentation network designed for natural images to work with their data. A first draft of this work is available on arXiv at https://arxiv.org/abs/2007.05615.
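The key adaptation is training from sparse point labels rather than dense masks. One common way to do this, sketched below with assumed tensor shapes, is to evaluate the cross-entropy loss only at the annotated pixel coordinates; this is an illustrative formulation, not necessarily the exact loss used in the paper.

```python
# Minimal sketch of a point-supervised segmentation loss: cross-entropy
# evaluated only at annotated pixels. Shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def point_supervised_loss(logits: torch.Tensor, points: list) -> torch.Tensor:
    """logits: (1, C, H, W) segmentation output for one image.
    points: list of (row, col, class_index) point annotations."""
    rows = torch.tensor([p[0] for p in points])
    cols = torch.tensor([p[1] for p in points])
    targets = torch.tensor([p[2] for p in points])
    # Gather per-point class scores: shape (num_points, C).
    point_logits = logits[0, :, rows, cols].t()
    # Unlabeled pixels contribute nothing to the loss.
    return F.cross_entropy(point_logits, targets)
```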
BisQue
BisQue is an open source platform for storing, visualizing, organizing, and analyzing images in the cloud, largely maintained by UCSB’s Vision Research Lab. My responsibilities include software engineering tasks such as integrating Keycloak user authentication, developing machine learning modules for public use, assisting with Docker issues, and advising undergraduate research students who participate in the project.
Deep Superpixel Features
Superpixels refer to an over-segmentation that groups pixels into homogeneous regions based on some criterion. They have numerous applications, but describing them computationally is not as straightforward or powerful as it could be. I am working to adapt new convolutional neural networks (CNNs) that generate superpixels so that they simultaneously generate deep features for those superpixels.
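For intuition, here is a minimal sketch of one way to attach deep features to superpixels: segment with classical SLIC, then average-pool a CNN feature map within each region. The learned superpixel CNNs I am adapting replace the classical segmentation step, so this is illustrative only.

```python
# Sketch: classical SLIC superpixels + average pooling of CNN features.
# n_segments and compactness are illustrative defaults.
import torch
import numpy as np
from skimage.segmentation import slic

def superpixel_features(image: np.ndarray, feat_map: torch.Tensor):
    """image: HxWx3 float array in [0, 1].
    feat_map: (C, H, W) CNN features upsampled to image resolution.
    Returns a (num_superpixels, C) matrix of region descriptors."""
    segments = slic(image, n_segments=200, compactness=10.0)  # HxW labels
    feats = []
    for s in np.unique(segments):
        mask = torch.from_numpy(segments == s)    # HxW boolean region mask
        region = feat_map[:, mask]                # (C, num_pixels_in_region)
        feats.append(region.mean(dim=1))          # one C-dim descriptor
    return torch.stack(feats)
```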
Skills / Frameworks
Python
PyTorch
OpenCV
CUDA
NumPy
SciPy
Scikit-learn
Ubuntu / Linux
Git
Docker
Scikit-image
Pillow
TensorFlow
Java
C, C++
MATLAB
HTML
JavaScript
tmux
vim
Graduate Courses
Digital Image Processing
Computer Imaging
Information Theory
Matrix Analysis
Machine Learning
Pattern Recognition
Computer Vision
Topics in Cybersecurity
Operating Systems