Papers
(* indicates equal contribution; 𝛼 indicates alphabetical author list)
                Code-enabled language models can outperform reasoning models on diverse tasks 
                C.E. Zhang, C. Colas, G. Poesia, J.B. Tenenbaum, J. Andreas 
                arXiv preprint
            
                People use fast, flat goal-directed simulation to reason about novel problems 
                K.M. Collins*, C.E. Zhang*, L. Wong*, M. Barba da Costa*, G. Todd*, A. Weller, S.J. Cheyette, T.L. Griffiths, J.B. Tenenbaum
                arXiv preprint
            
                Evaluating language models' evaluations of games 
                K.M. Collins, C.E. Zhang, G. Todd, L. Ying, M. Barba da Costa, R. Liu, P. Sharma, A. Weller, I. Kuperwajs, L. Wong, J.B. Tenenbaum, T.L. Griffiths
                arXiv preprint
            
                On the same wavelength? Evaluating pragmatic reasoning in language models across broad concepts 
                L. Qiu*, C.E. Zhang*, J.B. Tenenbaum, Y. Kim, and R.P. Levy
                EMNLP 2025
            
                Language-informed synthesis of rational agent models for grounded Theory-of-Mind reasoning on-the-fly 
                L. Ying, R. Truong, K.M. Collins, C.E. Zhang, M. Wei, T. Brooke-Wilson, T. Zhi-Xuan, L. Wong, and J.B. Tenenbaum
                EMNLP 2025 Findings
            
                Modeling open-world cognition as on-demand synthesis of probabilistic models 
                L. Wong*, K.M. Collins*, L. Ying, C.E. Zhang, A. Weller, T. Gerstenberg, T. O'Donnell, A.K. Lew, J. Andreas, J.B. Tenenbaum, and T. Brooke-Wilson
                CogSci 2025 (Talk)
            
                Scaling up the think-aloud method 
                D. Wurgaft*, B. Prystawski*, K. Gandhi, C.E. Zhang, J.B. Tenenbaum, and N.D. Goodman 
                CogSci 2025 (Talk)
            
                Building machines that learn and think with people 
                K.M. Collins*, I. Sucholutsky*, U. Bhatt*, K. Chandra*, L. Wong*, M. Lee, C.E. Zhang, T. Zhi-Xuan, M. Ho, V. Mansinghka, A. Weller, J.B. Tenenbaum, and T.L. Griffiths 
                Nature Human Behaviour
            
                Conditional and modal reasoning in large language models 
                W.H. Holliday, M. Mandelkern, and C.E. Zhang𝛼 
                EMNLP 2024
            
                People use fast, goal-directed simulations to reason about novel games 
                C.E. Zhang*, K.M. Collins*, L. Wong*, A. Weller, and J.B. Tenenbaum 
                CogSci 2024 (Talk)
            
                AI for mathematics: A cognitive science perspective 
                C.E. Zhang*, K.M. Collins*, A. Weller, and J.B. Tenenbaum 
                MATH-AI Workshop at NeurIPS 2023
            
                LINC: A neurosymbolic approach for logical reasoning by combining language models with first-order logic provers 
                C.E. Zhang*, T.X. Olausson*, A. Gu*, B. Lipkin*, A. Solar-Lezama, J.B. Tenenbaum, and R. Levy
                EMNLP 2023 (Outstanding Paper Award)
            
                The neuro-symbolic inverse planning engine (NIPE): modeling probabilistic social inferences from linguistic inputs 
                L. Ying, K.M. Collins, M. Wei, C.E. Zhang, T. Zhi-Xuan, A. Weller, J.B. Tenenbaum, and L. Wong 
                ToM Workshop at ICML 2023
            
                Towards a model of confidence judgements in concept learning 
                T.E. Mills*, T. Chen*, C.E. Zhang*, and J.B. Tenenbaum 
                CogSci 2023
            
                Grounded physical language understanding with probabilistic programs and simulated worlds 
                C.E. Zhang, L. Wong, G. Grand, and J.B. Tenenbaum 
                CogSci 2023
            
                Does Amy know Ben knows you know your cards? A computational model of higher-order epistemic reasoning 
                C. Zhang*, H. Ham*, and W.H. Holliday 
                CogSci 2021
            
                A model of temporal connective acquisition 
                M. Gorenstein*, C. Zhang*, and S.T. Piantadosi 
                CogSci 2020 (Talk)
            
                When do introspection axioms matter for multi-agent epistemic reasoning? 
                Y. Ding, W.H. Holliday, and C. Zhang𝛼 
                TARK 2019