Shravani018/README.md

Hi, I'm Shravani

I'm a Master's student in Applied Data Science at Frankfurt School of Finance & Management, with one year of experience as a Data Scientist in fintech spanning automation, analytics, and machine learning. My work has included building automation pipelines and executive dashboards, as well as developing ML-driven solutions such as OCR-based KYC workflows, market sentiment analysis, and customer segmentation models.

🔍 Current Focus

Researching explainable AI and model interpretability frameworks, with an emphasis on fairness-aware approaches for building transparent ML systems in regulated financial environments.

🖥️ Tech Stack

Programming & Data

Python R SQL

Analytics & Business Intelligence

Power BI Tableau Streamlit

Data Engineering & Automation

Apache Spark Apache Hadoop Databricks Selenium n8n

Cloud & Development Tools

AWS Docker

📫 Let's Connect

Email

Open to collaborations, discussions on Explainable ML & data science, or just a friendly chat about AI ethics!

Pinned

  1. llm-audit-bench

    A modular pipeline that audits five small Hugging Face LLMs across the five pillars of AI trustworthiness: transparency, fairness, robustness, explainability, and privacy, producing a per-pillar score for each model.

    Jupyter Notebook

  2. llm-stress-tester

    A framework to evaluate how open-source language models handle adversarial prompts, with tests covering hallucination traps and jailbreak scenarios.

    Jupyter Notebook

  3. interpreting-transformer-hallucinations

    Mechanistic interpretability of transformer hallucinations via attention flow, residual stream geometry, and head-level attribution analysis.

    HTML

  4. rag-from-scratch

    A minimal, fully local Retrieval-Augmented Generation pipeline built from scratch. Covers document chunking, embedding with sentence-transformers, vector storage via ChromaDB, and LLM generation th…

    Python

  5. xai-credit-risk-analysis

    A credit risk prediction project that compares multiple machine learning models and applies explainable AI methods to interpret and analyze model decisions.

    Jupyter Notebook

  6. deepfractals

    Visualizing deep learning dynamics as fractals in the complex plane.

    Python
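The retrieval core of a pipeline like rag-from-scratch can be sketched in a few lines. This toy version swaps the repo's sentence-transformers embeddings and ChromaDB store for a bag-of-words counter and an in-memory list, purely for illustration; the `chunk_size` and scoring choices here are arbitrary, not the repo's actual parameters.

```python
import math
from collections import Counter

def chunk(text, chunk_size=40):
    """Split a document into fixed-size word chunks (toy stand-in for real chunking)."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

def embed(text):
    """Bag-of-words 'embedding' (stand-in for a sentence-transformers model)."""
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the top-k chunks most similar to the query (the vector-store lookup)."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = ("ChromaDB stores embeddings. Sentence transformers encode text. "
        "Retrieval finds relevant chunks for the LLM prompt.")
chunks = chunk(docs, chunk_size=5)
context = retrieve("how are embeddings stored", chunks, k=1)
```

In the full pipeline the retrieved chunks would be concatenated into the LLM prompt for generation; here they are simply returned.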
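One of the simplest explainability methods in the family that xai-credit-risk-analysis draws on is permutation importance: shuffle a single feature column and measure how much accuracy drops. The rule-based "model" and the data below are invented for illustration only; the repo itself compares trained ML models and applies richer XAI methods.

```python
import random

# Toy "credit risk model": flag an applicant as risky when debt exceeds income.
# Both the rule and the rows are made up for illustration.
def model(row):
    return 1 if row["debt"] > row["income"] else 0

data = [
    {"income": 50, "debt": 80, "age": 30, "label": 1},
    {"income": 90, "debt": 20, "age": 45, "label": 0},
    {"income": 40, "debt": 70, "age": 52, "label": 1},
    {"income": 70, "debt": 10, "age": 28, "label": 0},
]

def accuracy(rows):
    return sum(model(r) == r["label"] for r in rows) / len(rows)

def permutation_importance(rows, feature, seed=0):
    """Accuracy drop after shuffling one feature's values across rows."""
    rng = random.Random(seed)
    shuffled = [r[feature] for r in rows]
    rng.shuffle(shuffled)
    permuted = [dict(r, **{feature: v}) for r, v in zip(rows, shuffled)]
    return accuracy(rows) - accuracy(permuted)

for feature in ("income", "debt", "age"):
    print(feature, permutation_importance(data, feature))
```

Because the toy model never reads `age`, shuffling that column cannot change any prediction, so its importance is exactly zero; `income` and `debt` lose accuracy whenever the shuffle breaks the debt-above-income pattern.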