Publications
This page lists works published by the Label Sleuth core team on system components and related aspects:
System Description
Label Sleuth: From Unlabeled Text to a Classifier in a Few Hours (EMNLP 2022)
This is the main publication describing the system. If you use Label Sleuth in your work in any capacity (e.g., for labeling data, extending the system, or running experiments), please cite this paper as follows:
@inproceedings{labelsleuth2022,
    title={{Label} {Sleuth}: From Unlabeled Text to a Classifier in a Few Hours},
    author={Shnarch, Eyal and Halfon, Alon and Gera, Ariel and Danilevsky, Marina and Katsis, Yannis and Choshen, Leshem and Cooper, Martin Santillan and Epelboim, Dina and Zhang, Zheng and Wang, Dakuo and Yip, Lucy and Ein-Dor, Liat and Dankin, Lena and Shnayderman, Ilya and Aharonov, Ranit and Li, Yunyao and Liberman, Naftali and Slesarev, Philip Levin and Newton, Gwilym and Ofek-Koifman, Shila and Slonim, Noam and Katz, Yoav},
    booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
    publisher={Association for Computational Linguistics},
    url={https://aclanthology.org/2022.emnlp-demos.16},
    year={2022}
}
Few-Shot Classification, Active Learning & User Feedback
Active Learning for BERT: An Empirical Study (EMNLP 2020)
Facilitating Knowledge Sharing from Domain Experts to Data Scientists for Building NLP Models (IUI 2021)
Cluster & Tune: Boost Cold Start Performance in Text Classification (ACL 2022)
Zero-Shot Text Classification with Self-Training (EMNLP 2022)
Explainability of NLP Models
Unsupervised Expressive Rules Provide Explainability and Assist Human Experts Grasping New Domains (Findings of EMNLP 2020)
A Survey of the State of Explainable AI for Natural Language Processing (AACL 2020)
Explainability for Natural Language Processing (AACL 2020 Tutorial)
XNLP: A Living Survey for XAI Research in Natural Language Processing (Interactive Website) (IUI 2021)
Explainability for Natural Language Processing (KDD 2021 Tutorial)
Evaluation of Interactive ML Systems