Daniel Shin
I am a Master's student in Computer Science at Stanford University.
Before Stanford, I was an undergraduate researcher at UC Berkeley, where I was fortunate to be
advised by Professor Daniel Brown, Professor Anca Dragan, and Professor Sergey Levine.
Previously, I interned as an Applied Scientist at Amazon working on transformers and
as a Machine Learning Research Intern at Sony AI working on multi-modal models.
Scholar /
Github /
LinkedIn /
Twitter
Optimizing Learning Across Multimodal Transfer Features for Modeling Olfactory Perception
Daniel Shin*,
Gao Pei*,
Priyadarshini Kumari,
Tarek Besold
In International Workshop on Multimodal Learning at SIGKDD 2023
PDF /
Slides
We introduce a novel multi-label, multimodal transfer learning technique for modeling olfactory perception.
Our approach tackles the challenges of data scarcity and label skewness in the olfactory domain.
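The sketch below is only a rough illustration of a generic multi-label transfer setup, not the method from the paper: a small classification head is trained on frozen, pretrained (e.g. multimodal) embeddings with a per-label sigmoid loss, where a pos_weight term can up-weight rare descriptors to counter label skewness. All names, dimensions, and weights here are hypothetical.

```python
import torch
import torch.nn as nn

class MultiLabelTransferHead(nn.Module):
    """Small multi-label classifier trained on top of frozen, pretrained
    (e.g. multimodal) embeddings; an illustrative transfer setup only."""

    def __init__(self, embed_dim: int, num_labels: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_labels),  # one logit per odor descriptor
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.head(features)

# Multi-label targets use an independent sigmoid per label; pos_weight
# up-weights rare descriptors to counter label skewness.
embed_dim, num_labels = 512, 100                      # hypothetical sizes
model = MultiLabelTransferHead(embed_dim, num_labels)
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.full((num_labels,), 5.0))

features = torch.randn(8, embed_dim)                  # frozen transfer features
targets = torch.randint(0, 2, (8, num_labels)).float()
loss = criterion(model(features), targets)
loss.backward()
```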
Benchmarks and Algorithms for Offline Preference-Based Reward Learning
Daniel Shin,
Anca Dragan,
Daniel Brown
Transactions on Machine Learning Research (TMLR)
arXiv /
website /
poster /
code
We study how an offline dataset of prior (possibly random) experience can be used to address challenges
that autonomous systems face when they endeavor to learn from, adapt to, and collaborate with humans.
First, we use the offline dataset to efficiently infer the human's reward function via pool-based active preference learning.
Second, given this learned reward function, we perform offline reinforcement learning to optimize a policy based on the inferred human intent.
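As an illustrative sketch of pool-based active preference learning in general, and not the paper's implementation, the snippet below fits a reward model with a Bradley-Terry loss over preferred trajectory segments and selects the next query from the offline pool by ensemble disagreement. RewardNet, select_query, and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Reward model over per-step observation features (illustrative stand-in)."""
    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(-1)  # per-step rewards, shape (..., T)

def preference_loss(reward_net, seg_a, seg_b, prefs):
    """Bradley-Terry loss: P(a preferred over b) = sigmoid(return_a - return_b)."""
    ret_a = reward_net(seg_a).sum(dim=1)  # summed predicted reward of segment a
    ret_b = reward_net(seg_b).sum(dim=1)
    return nn.functional.binary_cross_entropy_with_logits(ret_a - ret_b, prefs)

def select_query(ensemble, seg_a, seg_b):
    """Pool-based active querying: ask the human about the candidate pair
    on which an ensemble of reward models disagrees the most."""
    with torch.no_grad():
        votes = torch.stack([(m(seg_a).sum(1) > m(seg_b).sum(1)).float()
                             for m in ensemble])       # (n_models, n_pairs)
    return (votes.mean(0) - 0.5).abs().argmin()        # most uncertain pair

# Toy usage: segments are (n_pairs, T, obs_dim) feature tensors, prefs in {0., 1.}
obs_dim = 4
ensemble = [RewardNet(obs_dim) for _ in range(3)]
seg_a, seg_b = torch.randn(16, 50, obs_dim), torch.randn(16, 50, obs_dim)
prefs = torch.randint(0, 2, (16,)).float()
preference_loss(ensemble[0], seg_a, seg_b, prefs).backward()
query_idx = select_query(ensemble, seg_a, seg_b)
```

The learned reward would then be used to relabel the offline dataset before running a standard offline RL algorithm on it.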
Hybrid Imitative Planning with Geometric and Predictive Costs in Off-road Environments
Nitish Dashora*,
Daniel Shin*,
Dhruv Shah,
Henry Leopold,
David Fan,
Ali Agha-Mohammadi,
Nicholas Rhinehart,
Sergey Levine
ICRA, 2022
arXiv /
website /
poster
Geometric methods for open-world off-road navigation, which build occupancy and metric maps, generalize well but can be brittle in outdoor environments. Learning-based methods can learn collision-free behavior directly from raw observations, but are difficult to integrate with standard geometry-based pipelines. This creates an unfortunate conflict: either use learning and lose out on well-understood geometric navigational components, or forgo it in favor of extensively hand-tuned geometry-based cost maps. In this work, we reject this dichotomy by designing the learning-based and non-learning-based components so that they can be effectively combineded in a self-supervised manner.
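A minimal sketch of the hybrid-cost idea in general terms, assuming a callable geometric cost map and a learned collision predictor are already available; plan_with_hybrid_cost, the weights, and the toy inputs are hypothetical and not the system described in the paper.

```python
import numpy as np

def plan_with_hybrid_cost(candidate_trajs, geometric_cost, learned_collision_prob,
                          w_geom=1.0, w_learned=5.0):
    """Score candidate trajectories with a combined cost: a geometric cost
    queried from an occupancy/metric map plus a learned collision probability
    predicted from raw observations. Names and weights are hypothetical.

    candidate_trajs: (K, T, 2) array of (x, y) waypoints in map coordinates
    geometric_cost: callable (x, y) -> cost from the mapping pipeline
    learned_collision_prob: callable traj -> (T,) predicted collision probabilities
    """
    best_traj, best_cost = None, np.inf
    for traj in candidate_trajs:
        geom = sum(geometric_cost(x, y) for x, y in traj)
        learned = learned_collision_prob(traj).sum()
        cost = w_geom * geom + w_learned * learned
        if cost < best_cost:
            best_traj, best_cost = traj, cost
    return best_traj, best_cost

# Toy usage with placeholder cost functions
trajs = np.random.rand(10, 20, 2)
geom_cost = lambda x, y: float(x**2 + y**2)               # placeholder map query
collision_model = lambda traj: np.random.rand(len(traj))  # placeholder learned model
best, cost = plan_with_hybrid_cost(trajs, geom_cost, collision_model)
```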