
## Background and Specification Progress Report (BSPR) Draft

### 1. **Background and Context for the Project**

Generative AI systems, such as large language models and image generation tools, have surged in use and development, offering unprecedented applications across fields such as healthcare, finance, and the creative industries. However, these technologies raise critical concerns regarding privacy, security, and user trust. This project investigates how privacy and safety concerns affect trust in generative AI systems across diverse populations, with the aim of designing and developing an AI transparency dashboard. This tool will give users insight into the inner workings of generative AI systems, focusing on privacy safeguards and the transparency of AI model operations.

Emerging literature on trust, privacy, and safety in AI has underscored the need for transparency mechanisms. Despite the recognition of these issues, limited research has been conducted on how transparency affects trust across different demographic groups, and few tools exist to evaluate users' perceptions of privacy within AI applications in real-world contexts.

### 2. **Review of Relevant Literature**

**2.1 Privacy, Security, and Trust Frameworks in AI**

- *AI Trust Framework and Maturity Model* (2024): This study provides a
foundational understanding of trust in AI, highlighting the components of
transparency, explainability, and accountability in AI systems. These
factors are crucial to fostering user trust and serve as a guiding
framework for this project.

- *Evaluating Privacy, Security, and Trust Perceptions in Conversational AI* (2024): This research reviews user perceptions of privacy and trust in conversational AI, identifying gaps in transparency and accountability that undermine user trust.

- *Data Privacy in an AI-Driven World: Balancing Innovation and Security* (2024): This report discusses the challenges of balancing user privacy with AI innovation, emphasizing the necessity of privacy-by-design principles. This concept will inform the dashboard design by incorporating transparency features to support privacy while minimizing data risk.

**2.2 Identified Research Gaps**

- Lack of empirical data on AI trust perceptions across varied demographics, limiting understanding of cross-cultural and socio-economic factors in AI trust.

- Absence of interactive, user-facing tools that enable real-time exploration of privacy features in generative AI models.

### 3. **Requirements**

**3.1 Functional Requirements**

- **Transparency Dashboard**: Display core AI model functions, data-handling processes, and explanations of model decision-making.

- **Privacy Features**: Offer user-customizable privacy options with explanations of data-handling practices.

- **User Feedback Collection**: Collect user perceptions of privacy and trust to inform continuous improvements (see the data-model sketch after this list).
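
As a minimal sketch of how the feedback and privacy requirements above could be represented in the Python backend, the following dataclasses are illustrative only; the field names and the rating scale are assumptions, not part of the specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class PrivacySettings:
    """User-customizable privacy options (illustrative fields only)."""
    store_prompts: bool = False          # whether raw prompts may be retained
    share_usage_metrics: bool = False    # whether anonymized usage statistics may be shared
    explanation_level: str = "summary"   # "summary" or "detailed" data-handling explanations


@dataclass
class TrustFeedback:
    """A single user-feedback record on privacy and trust perceptions."""
    participant_id: str                  # pseudonymous identifier, never a real name
    trust_rating: int                    # e.g., 1 (low trust) to 5 (high trust)
    privacy_concern: str                 # free-text comment from the participant
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

Keeping the stored record this small reflects the minimal-data-collection requirement in Section 3.2.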

**3.2 Non-Functional Requirements**

- **Usability**: The dashboard should be user-friendly and accessible to individuals with varying technical backgrounds.

- **Data Privacy Compliance**: Comply with relevant privacy standards (e.g., GDPR) and maintain minimal data collection for user testing.

- **Scalability**: The dashboard should accommodate future feature expansions, such as AI model comparisons and more detailed model metrics.

### 4. **Specification**
The AI transparency dashboard will be developed primarily using Python
for backend processing, React for the frontend interface, and TensorFlow
or PyTorch for any required machine learning components. The core
components of the specification include:

- **Backend Processing**: Utilize Python and relevant libraries (e.g., TensorFlow, Scikit-Learn) for processing transparency-related data, managing user interactions, and interfacing with the AI model.

- **Frontend Interface**: React will be used to create a dynamic and interactive user interface. The dashboard will display information on AI processes, transparency features, and privacy options.

- **Data Management and Security**: Secure data handling will be a priority, ensuring that the dashboard stores only minimal user data, in compliance with data privacy regulations (a data-minimization sketch follows this list).
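
One possible way to implement the minimal-data principle is sketched below; `pseudonymize` and `minimize_record` are hypothetical helpers written for illustration, and a real deployment would load the salt from a secrets manager rather than an environment variable.

```python
import hashlib
import os

# Per-deployment salt; assumed to be supplied via configuration in practice.
_SALT = os.environ.get("DASHBOARD_SALT", "change-me")


def pseudonymize(direct_identifier: str) -> str:
    """Replace a direct identifier (e.g., an email address) with a salted hash.

    Only the hash is stored, so feedback records cannot be linked back to an
    individual without the salt, supporting the data-minimization goal.
    """
    digest = hashlib.sha256((_SALT + direct_identifier).encode("utf-8")).hexdigest()
    return digest[:16]


def minimize_record(raw: dict) -> dict:
    """Keep only the fields the dashboard actually needs for analysis."""
    allowed = {"trust_rating", "privacy_concern", "submitted_at"}
    record = {key: value for key, value in raw.items() if key in allowed}
    record["participant_id"] = pseudonymize(raw["email"])
    return record
```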

### 5. **Design**

**5.1 Dashboard Design**

- **Architecture**: The dashboard will be a web-based tool accessible through standard browsers, using React for the frontend and Flask or Django for backend API services.

- **Interface Components**:

  - **Transparency Panel**: Visualizations and summaries of AI model operations.

  - **Privacy Control**: Customizable privacy settings with explanatory tooltips.

  - **Feedback Mechanism**: Interface elements for user feedback submission to assess privacy and trust perceptions (a minimal API sketch of these components follows this list).
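
To make the interface components above concrete, here is a minimal Flask sketch of backend endpoints that the Transparency Panel and Feedback Mechanism could call. The route names, payload fields, and model summary are assumptions for illustration, not a fixed API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stores for this sketch only; a real deployment would use a database.
FEEDBACK: list[dict] = []
MODEL_SUMMARY = {
    "model": "generative-ai-demo",
    "data_handling": "Prompts are processed in memory and not retained by default.",
    "decision_factors": ["prompt content", "safety filters", "sampling parameters"],
}


@app.route("/api/transparency", methods=["GET"])
def transparency_panel():
    """Serve the summaries displayed in the Transparency Panel."""
    return jsonify(MODEL_SUMMARY)


@app.route("/api/feedback", methods=["POST"])
def submit_feedback():
    """Accept a privacy/trust submission from the Feedback Mechanism."""
    payload = request.get_json(force=True)
    FEEDBACK.append({
        "trust_rating": int(payload.get("trust_rating", 0)),
        "privacy_concern": str(payload.get("privacy_concern", "")),
    })
    return jsonify({"status": "recorded", "count": len(FEEDBACK)}), 201


if __name__ == "__main__":
    app.run(debug=True)
```

The React frontend would fetch `/api/transparency` to populate the panel and POST to `/api/feedback` when a user submits the feedback form.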

**5.2 Technology Stack**

- **Programming Languages**: Python for backend processes, JavaScript (React) for the frontend.

- **Libraries/Frameworks**: TensorFlow/PyTorch for AI components, Flask/Django for API management, and Scikit-Learn for data analysis.
- **Development Tools**: Jupyter Notebooks for data prototyping, GitHub
for version control, and cross-platform testing environments (Linux,
Windows, macOS).

**5.3 Ethical Considerations**

- The project involves user testing to gauge perceptions of trust and privacy, which requires ethical clearance. The REMAS form has been submitted, and privacy measures will be implemented to ensure compliance with data protection standards.

---

### Methodology Diagram

Here is a high-level outline for the methodology, which can be expanded into a diagram:

1. **Data Collection & Literature Review**

- Initial literature review on privacy, trust, and AI transparency.

- Formulate research questions and gather demographic data for target user groups.

2. **Prototype Development**

- **Backend Setup**: Python, TensorFlow/PyTorch, and Flask/Django for AI processing and API integration.

- **Frontend Setup**: React for UI, focusing on a transparency panel and privacy settings.

3. **Testing and Feedback Collection**

- Recruit 10-15 diverse participants for qualitative feedback.

- Conduct usability testing and gather insights on user trust and privacy
concerns.
4. **Iterative Refinement**

- Analyze feedback to refine dashboard components (see the analysis sketch after this list).

- Update privacy options and transparency explanations based on user input.
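
As a sketch of how the feedback gathered in step 3 could be summarized during refinement, the short pandas example below uses hypothetical column names and made-up rows purely for illustration.

```python
import pandas as pd

# Hypothetical export of feedback collected during usability testing.
feedback = pd.DataFrame([
    {"participant_id": "p01", "trust_rating": 4, "privacy_concern": "Unsure what data is stored."},
    {"participant_id": "p02", "trust_rating": 2, "privacy_concern": "Wants an opt-out for prompt logging."},
    {"participant_id": "p03", "trust_rating": 5, "privacy_concern": ""},
])

# Simple aggregates to guide which dashboard components to refine first.
summary = {
    "participants": len(feedback),
    "mean_trust_rating": feedback["trust_rating"].mean(),
    "low_trust_share": (feedback["trust_rating"] <= 2).mean(),
    "open_concerns": int((feedback["privacy_concern"] != "").sum()),
}
print(summary)
```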

---

This structure provides the foundation for the BSPR, establishing a clear project context, detailed requirements, and a transparent methodology.
