AI Interpretations take your assessments to the next level by enabling automated, analysis-based scoring and personalized feedback that is connected to the context of each unique response. When used well, they scale enterprise-grade analysis, help to reduce assessor inconsistency, and can identify new ways of looking at feedback that you have never considered before.
AI Interpretations can be prompted to use any answer type, rating, table, respondent detail or score to build feedback. They can even use uploaded documents, spreadsheets, PowerPoint presentations or PDFs as context.
You can use AI Interpretations to review policies and processes, determine whether a business qualifies for a certification, summarize an auditor's on-site notes and so much more. Feedback and scoring generated with AI Interpretations can then be pulled into all feedback formats, such as a results page, a downloadable PDF, an editable Word report, a cohort report or a 360 analysis.
AI Interpretations can be used for an individual response as well as for a group of responses.
An Individual AI Interpretation will generate results taking into account the data from a single response, whereas a Cohort AI Interpretation will generate results taking into account the data from all responses within a cohort.
Individual AI Interpretations must be added in a 'Normal' or 'Individual Report' Section Type, whereas Cohort AI Interpretations must be added in a 'Cohort Report' Section Type.
1. Adding a new AI Interpretation
1a. Adding an Individual AI Interpretation
1b. Adding a Cohort AI Interpretation
1a. Adding an Individual AI Interpretation
If you have one or more individual AI Interpretations that refer to several subsections and/or sections of the assessment:
- Go to the Assessment Builder.
- Add a new Section (the Section must be added and saved before moving on to step 3).
- Edit Section --> Options --> Select Section Type = Individual Report --> Save (this Section will be hidden from respondents).
- Add a new Subsection.
- Click 'Add AI Interpretation'.
If you have an AI Interpretation that refers to a particular subsection of the assessment:
- Add the AI Interpretation directly below the subsection it refers to.
1b. Adding a Cohort AI Interpretation
- Go to the Assessment Builder.
- Add a new Section (the Section must be added and saved before moving on to step 3).
- Edit Section --> Options --> Select Section Type = Cohort Report --> Save (this Section will be hidden from respondents).
- Add a new Subsection.
- Click 'Add AI Interpretation'.
2. AI Interpretation Window
The AI Interpretation window will open. It includes 4 main areas:
2a. The AI Interpretation Main Settings
2b. The AI Prompt
2c. The AI Score Prompt
2d. The AI Interpretation Options tabs
2a. The AI Interpretation Main Settings
- Order. This is the unique number of the AI Interpretation, which is ordered like a question number.
- Name. The name of the interpretation.
- Max Points. This is used if the interpretation result is a score. The score will be divided by the Max Points to get a percentage score.
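For example, if the Interpretation returns a score of 7 and Max Points is set to 10, the reported percentage score will be 70%.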
2b. The AI Prompt
- Text box. Add the prompts. These will usually use a mixture of free text and merge strings (see the example prompt after this list). Refer to: Writing Effective AI Interpretation Prompts
- Add Merge String. This will open a new window with access to the merge manager and the merge strings available for AI Interpretations.
Merge Types available:
- Charts (tables only)
- Individual Answers
- Ratings based Text
- Response Settings
- Scores
- Response. Select a Response to run a test on.
- Test Interpretation. Tests the interpretation in the Builder. The Interpretation results will appear below.
- Clear. Will delete the AI Prompt.
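As an illustration only (the wording and the merge string references are hypothetical), a prompt might combine free text and merge strings like this: "Acting as an experienced assessor, summarize the strengths and weaknesses evident in the following answers and suggest two improvement actions: {ResponseAnswer.AnswerText[S2 SS1 Q1]} {ResponseAnswer.AnswerText[S2 SS1 Q2]}".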
2c. The AI Score Prompt
- Calc after scoring. Select this checkbox if (and only if):
- There is a score of any type as input to this calculation or
- There is an answer as input that is output by another calculation that has a score as input.
- Result is numeric. If the result should be converted into a numeric value, select this checkbox.
- Number Format. Select from General (no formatting), Thousand separated (with commas, or as appropriate for the culture code of the assessment's Language set in the assessment settings, or en-US by default), Currency (again using that language), or Percentage.
Note: If the output will be used as a numeric value in a subsequent calculation or validation, you must output it in the General format.
- Decimal Places. This field is only used for numeric results. Specify the number of decimals to be displayed. The result of the calculation will be rounded to this number of decimals.
- Text box. Write the prompts. The prompt must ask for a number/score to be returned, not text (see the example after this list).
- Add Merge String. This will open a new window with access to the merge manager and available merge strings to be used for the score prompt.
- Test Interpretation. Test the interpretation in the Builder for the score prompt.
- Clear. Will delete the Score Prompt.
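As an illustration only (the wording and the merge string reference are hypothetical), a score prompt might read: "Based on the following answers, return a single number between 0 and 10, and nothing else, representing how well the documented process meets the certification criteria: {ResponseAnswer.AnswerText[S3 SS2 Q4]}". If Max Points is set to 10, a returned score of 7 would then be reported as 70%.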
2d. The AI Interpretation Options tabs
- Documentation. Unlimited documentation of the Interpretation can be entered. It does not affect the Interpretation or appear anywhere but here.
- Conditions. The Interpretation can be conditional, like a question. If the conditions are not met, the system will treat it as unanswered and no Interpretation will be done.
- Options. The Interpretation can be linked to segmentations or a classifier in the same way as questions. Segmentations are required to report scores.
- The name of the Interpretation.
- Choose which response setting to link to.
- Select a Segmentation if required.
- Select a Classifier if required.
3. Managing AI Interpretations
3a. Managing AI Interpretations at Response Level
3b. Managing AI Interpretations at Cohort Level
Before an individual or cohort report/results dashboard is sent, Managers and Admins will be able to review and adjust results of their AI Interpretations.
3a. Managing AI Interpretations at Response Level
If the assessment is not using the Assessor Functionality, the result(s) of the individual AI Interpretation(s) cannot be reviewed and adjusted before the report/results dashboard is generated and sent.
To review and potentially adjust the result(s) of the individual AI Interpretation(s) before the report/results dashboard is generated, enable the Assessor functionality. For more details about the Assessor functionality, please review this article: Assessor Functionality – Brilliant Assessments.
Once the assessor functionality is enabled and the response has the Status 'Answered', the assessor will be able to review the AI interpretation result(s):
From the Response List --> click 'Action' --> View Interpretations.
This opens the Response Interpretations screen, where you can Edit and Lock any results.
Click on Edit:
- Interpretation Results. Review and adjust as necessary.
- Locked. Tick this box when satisfied with the result.
- Rerun. AI will generate a new result.
- Close.
- Save.
Once all results have been reviewed and locked, click 'Save and Complete'.
This will update the response status to Completed and trigger the generation of the individual report/results dashboard.
3b. Managing AI Interpretations at Cohort Level
If using Cohort AI Interpretations, the results of these interpretations will be available for review and adjustment from the 'Interpretations' tab.
Click on 'Edit' to review, adjust (if necessary), and save and lock the result. These reviews and adjustments should take place before the cohort's requested completion date.
The locked result will be included in the cohort report/results dashboard.
AI Interpretations can automate your feedback and reimagine what's possible in your assessments. So give them a try today!
4. Using AI Interpretations in Reports
Once your AI Interpretations are set up and generating results, you can include them in your feedback reports, cohort reports, or results dashboards using merge strings.
4a. Using AI Interpretation Text in Reports
4b. Using AI Interpretation Scores in Reports
4a. Using AI Interpretation Text in Reports
To include the AI-generated text result in your report, use the Answer Text merge string for the specific AI Interpretation question:
Merge String: {ResponseAnswer.AnswerText[Sn SSn Qnn]}
Where:
- Sn = S followed by the Section number (e.g., S5 for Section 5)
- SSn = SS followed by the Subsection number (e.g., SS1 for Subsection 1)
- Qnn = Q followed by the Question number of the AI Interpretation (e.g., Q10 for the first AI Interpretation in that subsection)
Example: If your AI Interpretation is in Section 5, Subsection 1, and is the first question (Q10), the merge string would be: {ResponseAnswer.AnswerText[S5 SS1 Q10]}
Note:
- You can find the Section, Subsection, and Question numbers in the Assessment Builder.
- The AI Interpretation is treated like a question, so its Order number is its Question number.
4b. Using AI Interpretation Scores in Reports
To use an AI-generated score in your report, the best practice is to link the AI Interpretation to a Segmentation. You can then reference the score of that segmentation in your report using the standard score merge strings.
- Link to Segmentation. In the AI Interpretation Options tab, select the appropriate Segmentation to link to your AI Score. (Read more about Segmentations)
- Reference in Report. Use the Segmentation score merge strings in your report template to display the AI-generated score.
Alternative options: AI Interpretation scores can also be used as:
- Subsection Score – The AI score contributes to the overall subsection score.
- Section Score – The AI score contributes to the overall section score.