Researchers of advanced technologies are constantly seeking new ways to measure and adapt to user performance. Appropriately adapting system feedback requires accurate assessments of user performance. Unfortunately, many assessment algorithms must be trained on pre-prepared data sets or corpora in order to provide a sufficiently accurate portrayal of user knowledge and behavior. However, if the targeted content of the tutoring system changes depending on the situation, the assessment algorithms must be sufficiently content-independent to apply to untrained material. Such is the case for iSTART, an intelligent tutoring system that assesses the cognitive complexity of strategy use while a reader self-explains a text. iSTART is designed so that teachers and researchers may add their own (new) texts to the system. The current paper examines student self-explanations of newly added texts on which iSTART had not been trained, and evaluates the iSTART assessment algorithm by comparing its scores to human ratings of the students' self-explanations.