Implementation in Johnson County

Overview of the external evaluation

During the first year of the project, Vera contracted with an external evaluator to conduct a process evaluation of the SARTCP. The evaluator and her team of Kansas-based researchers began by observing meetings between Vera staff, facility administrators, and representatives from outside agencies that would be involved in sexual assault response. They also conducted baseline interviews with the administrators and key members of the community SART, including staff from the rape crisis center and the SAFE program coordinator. They continued to observe meetings throughout the project and conducted individual interviews with key personnel a second time, shortly before the project ended. The initial interviews revealed some fault lines in the early collaboration between the DOC and community partners. The researchers attributed these to misunderstandings at the Phase 2 stakeholders meeting, issues that were cleared up over time.

The project had four major data-collection efforts: in-depth semi-structured interviews with residents of the two facilities; staff surveys; training evaluations; and an analysis of the facilities’ critical incident reviews. The following section summarizes the process of these efforts and, out of respect for the DOC, reports only very broad findings. Detailed findings were shared confidentially with DOC administrators to assist them with future planning related to PREA.

Resident interviews

The project’s resident interviews focused on the climate in the facility; whether residents were aware of sexual violations of various types (from verbal harassment to rape) committed by residents against other residents or by staff against residents; the likelihood of disclosure of sexual assault; preferred staff members for making potential disclosures; the anticipated response to a disclosure; and types of support services the person would want. The interviewers did not ask about the individual’s personal experience of sexual assault. Because of the sensitive nature of the information such interviews may elicit, they should be conducted only by outside professional researchers or evaluators who have experience interviewing survivors of sexual assault and whose work is subject to oversight by an ethics review committee.

The evaluators provided informed-consent forms to the adult participants and to the parents or guardians of juvenile participants. The evaluators explained what informed consent entails, assured residents that there would be no repercussions for participating or not participating, and asked them to sign. Because the JDC interviews required parental consent and the assent of the youth, a convenience sample was a necessity.[1] Evaluators and DOC staff arranged for a counselor to be available in case the interview elicited traumatic memories.

Overall, the interviews did not reveal a sexual assault problem at ARC or JDC. They did uncover some concerns about sexual joking and verbal harassment among residents and by staff. Residents also expressed some wariness about how staff might respond to disclosures of sexual assault and had low expectations of confidentiality in the event of a disclosure—a concern that diminished by the time follow-up interviews were completed. A number of the residents indicated that they would report sexual assault to a staff member, and most said they felt safe at the facilities. Some residents recommended that information about sexual assault and reporting be presented a day or two after intake rather than during it, which is when they typically receive the information in accordance with the PREA standards. They said that intake can be an overwhelming time and thought they might be able to process the information better afterward.

[1] A “convenience sample” uses the most available subjects, typically volunteers. This type of sample runs the risk of not representing the whole population if one group is more accessible or more likely to volunteer than another group is. For example, younger residents might be warier than older residents; residents participating in more activities might be less available than others who have more unstructured time.

The staff survey

An online staff survey was conducted once before the first major training and again nine months later. The anonymous survey focused on knowledge of PREA and services for victims; beliefs about sexual assault perpetrated by residents and sexual misconduct by staff; beliefs about obstacles to disclosure; and beliefs about the frequency of false allegations. The survey was lengthy, and less than 50 percent of staff from both facilities responded to the baseline survey. The follow-up had a better response rate, with 57 percent from the ARC and 77 percent from the JDC completing the survey. Facilities that undertake a staff survey might get higher response rates by using a shorter survey.

The surveys revealed three main needs that were addressed in subsequent trainings:

  • Many staff members were not familiar with PREA. This was remedied by the time of the follow-up survey, after staff had attended the consultant-led training and the new sexual assault response policy had been rolled out.
  • Some staff members were misinformed about how administrators handle reports of sexual assault and the scope of information they can legally and ethically share with staff about these reports. The subsequent shift-supervisor training and PREA trainings for line staff addressed these issues.
  • Some staff members were confused about when to report sexual assault—and whether physical injury or other criteria were necessary for reporting or whether verbal sexual harassment would be enough to trigger the reporting requirements under the department’s new policy. Many staff also believed that residents would make false reports to gain some advantage or revenge. Shift-supervisor training and PREA trainings for line staff also addressed these issues.

Training evaluations

Trainings were evaluated in two ways: with a participant feedback survey (see Appendix 4 for the questionnaire that was used) and through the evaluators’ observations. The survey administered at the end of the training sessions asked the trainees to rate the utility of the training and their satisfaction with various aspects of it and included open-ended questions about the most- and least-useful parts of the training and recommendations for future training. The evaluators’ observations were very informative about how the training was received, gaps in the training, and possible improvements.

An expert on sexual assault and the implementation of PREA in prisons conducted the first training, which covered sexual assault in facilities, PREA, sexual trauma, medical and psychological responses to sexual assault, and vulnerable populations. Although the evaluation team observed a few staff members being disruptive or not paying attention during class, staff nevertheless rated this training positively: they were very satisfied with the trainer, the pace, and the applicability to their jobs. They were frustrated that most of the research had focused on prisons rather than residential programs, and they wanted more information and material specific to their facility. They found the flowcharts showing the first-response steps most useful and suggested printing the charts in color and making them accessible to staff electronically. Their suggestions for improving the training centered on allowing more opportunities for interaction and role plays, as well as incorporating testimonials, case studies, video clips, or some combination of these, to generate more interest and discussion. They also recommended more discussion of the practical application of the material to their daily job functions.

The directors of the two facilities conducted the second training, which was designed to train supervisors to train and coach the front-line staff. It was a shorter and more focused training than the first, covering PREA standards, the facilities’ newly developed policies, the first-response protocol, and PREA audits. Materials provided included the slides the trainers used, a laminated card with a brief version of the first-response protocol, a flowchart depicting steps of the response for each facility, and MOCSA brochures. The training consisted of a presentation by the directors, group discussions, and role plays.

The supervisor trainees asked many questions for clarification, including questions about confidentiality and privacy, victims’ options in declining services, how much a first responder should ask (given that it is not the first responder’s role to investigate the complaint), and how to deal with disclosure of a past assault that may have been perpetrated in the community, at another facility, or at home.

The feedback from supervising staff on the training was extremely positive. They considered the training very useful, thought the role plays were especially instructive and generated thoughtful discussion, and found the flowcharts valuable. Additional needs the supervisors cited were more training and better understanding of the PREA requirements and their integration into DOC policy.

In addition to these two trainings, supervisors at both DOC facilities delivered PREA training to line staff, which the evaluation team observed. Supervisors continue to conduct trainings for new hires and provide annual refresher training to all staff.

The PREA trainings for line staff last approximately 90 minutes and consist of a presentation on PREA and sexual assault response, including the role of first responders, the composition of the local SART, and the ways that victims can report abuse. Following the presentation, participants work in groups to role-play various scenarios. The evaluators have noted that participants have been engaged and attentive during these trainings and that presenters have learned over time how to model empathy for victims.

Critical incident reviews

In the final year of the project, following the development and implementation of sexual assault response policies and procedures at the two DOC facilities, the evaluators conducted a review of critical incident reports at the JDC and ARC. Reports at the JDC spanned 12 months and reports from the ARC covered 18 months. Reports did not necessarily detail incidents of sexual abuse; rather, they described the facility’s response to every allegation or complaint of a sexual nature. The goals of the evaluators’ review were to determine how closely the facilities were adhering to their policies and procedures and to help them identify any areas for improvement or revision.

The facilities completed the critical incident reviews and issued reports in different ways. At one facility, a single person conducted the investigations and wrote the critical incident reports; at the other, one person investigated and then convened a committee to review the cases. Both approaches were effective, but using a committee to review investigations seemed to result in closer compliance with policy and procedure and to ensure the involvement of administrators in the investigations and reviews.

Overall, the evaluators concluded that responders had taken actions that adhered to stated policies and procedures. But they found a few issues at both facilities that required some clarification or consideration for improvement. In some cases, those issues required a simple note of clarification in a flowchart or policy. In others, like reducing the time lag between an investigation and a review, facility administrators needed to consider modifying a procedure to improve effectiveness and efficiency.
