Use AI freely, but know where the work is happening.

A practical guide for faculty on privacy, model choice, context, validation, and assessment.

Core decisions from the meeting.

Privacy is not a footnote. It is the first branch in the decision.

Routine drafting, planning, coding, and non-sensitive analysis can usually use cloud tools. Medical, psychological, financial, legal, student, or personally identifying material should not be pasted into them.

Routine material: cloud tools are usually acceptable.
Sensitive material: keep it local or on university infrastructure.
Unclear material: treat it as sensitive until you know otherwise.

Choose by task: fast answer, careful reasoning, coding, video, or local privacy.

The meeting treated models as tools with different strengths. Fast modes are good for low-stakes drafting; thinking modes are better for comparison, critique, and planning. Video-native tools help with presentations.

Instant: quick drafting and simple summaries.
Thinking: analysis, comparison, planning, and synthesis.
Local: sensitive documents that should stay off external servers.

A good prompt is a workflow, not a magic sentence.

Context, task, output format, and validation instructions all matter. The model should know who you are helping, what material it can use, and what a useful answer should look like.

Context: role, audience, sources, limits, and examples.
Task: compare, critique, extract, draft, grade, or build.
Output: length, structure, language, citation style, and evidence.
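The workflow above can be sketched in code. This is a minimal illustration, not a required template: the field names and example values are invented for the sketch, and any real prompt should use your own context and sources.

```python
# Assemble a prompt from explicit parts (context, task, output, validation)
# instead of a single magic sentence. All labels and examples here are
# illustrative assumptions, not a prescribed format.

def build_prompt(context: str, task: str, output: str, validation: str) -> str:
    """Join the four parts the guide names into one structured prompt."""
    return "\n\n".join([
        f"Context: {context}",
        f"Task: {task}",
        f"Output: {output}",
        f"Validation: {validation}",
    ])

prompt = build_prompt(
    context="You are helping a lecturer. Use only the attached syllabus.",
    task="Compare the two draft assignments and critique their clarity.",
    output="A short summary in English, with bullet points and quoted evidence.",
    validation="Quote the source for each claim; mark anything you cannot verify.",
)
print(prompt)
```

Changing any one part, for example tightening the validation instruction, changes what a useful answer looks like, which is the point of treating the prompt as a workflow.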

Do not pretend AI is absent. Design proof that understanding is present.

A short video or oral explanation can show whether students understand the material. AI may help prepare, but the final work must still reveal judgment, evidence, and ownership.

Research: AI can support exploration and practice.
Evidence: students distinguish verified claims from suggestions.
Presentation: students explain the subject in their own words.

Keep sensitive work in the right places.

The practical rule from the meeting is simple: when the information could harm a person, a student, a patient, a project, or the university if exposed, do not upload it casually.

Cloud models

Use them for routine academic work, planning, drafting, coding, public documents, and general learning tasks.

Local models

Use them when the data should stay on your computer. Open models can be useful for private drafts and sensitive source material.

University infrastructure

For larger sensitive needs, Bar-Ilan's on-premise options, arranged through IT or DSAI, may be the right direction.

Find the right model for the work in front of you.

This is a small decision studio: change the inputs and the recommendation shifts. It makes the meeting's advice usable without another long explanation.

Inputs: task, data, priority, and output.
Example recommendation: a local or on-premise reasoning model. Sensitive data should stay local. Use a careful model, provide only relevant context, and ask for evidence limits.
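The studio's logic can be sketched as a small rule set, assuming a simple split: data sensitivity decides where the model runs, and the task decides which kind of model to prefer. The labels and rules below are illustrative assumptions, not an official policy.

```python
# Toy sketch of the decision-studio logic. The category labels
# ("sensitive", "analysis", "accuracy", ...) are assumptions for
# illustration; adapt them to your own cases.

def recommend(task: str, data: str, priority: str) -> str:
    where = "local or on-premise" if data == "sensitive" else "cloud"
    kind = (
        "reasoning model"
        if task in {"analysis", "comparison", "planning"}
        else "fast model"
    )
    extra = (
        "ask for evidence limits"
        if priority == "accuracy"
        else "keep the prompt short"
    )
    return f"{where} {kind}; {extra}"

print(recommend("analysis", "sensitive", "accuracy"))
# → local or on-premise reasoning model; ask for evidence limits
```

The point of the sketch is the first branch: sensitivity is checked before anything else, matching the guide's rule that privacy is the first decision.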

Validation should feel like research practice, not paranoia.

The meeting’s strongest practical message was that AI can help, but it must be checked. The examples below apply this to research, privacy, and teaching.

Research

Ask for source quotes and page details.

When a model uses documents, require direct evidence and mark what cannot be verified.

Privacy

Keep participant data out of the cloud.

Sensitive records belong in local or university-controlled environments.

Teaching

Use AI to prepare, then ask students to explain.

Video or oral presentation can show whether students actually understand the topic.


For the next session, bring one real task.

A document, assessment, course workflow, research task, or question is enough. The next meeting should be hands-on rather than another abstract tour.
