Publications
Conferences
Seminar | Data Science Lab
2022
Training Generative Adversarial Networks in One Stage
Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little
Dice Loss for Data-imbalanced NLP Tasks
How Do Vision Transformers Work?
Denoising Diffusion Probabilistic Models