

| Person | Role |
|---|---|
| Siddhi Nalawade | Web App & System Integration |
| Shravani Sarvi | IoT and ESP8266 Control |
| Bhakti Potdar | ML & OpenCV Detection System |
| Srushti Waydande | Electronics Hardware & Energy Automation |



🎤 Smart Classroom Automation using IoT and Computer Vision

Full 10-Slide Presentation Script (with role-wise flow)




Slide 1 – Title Slide

Siddhi:
“Good morning everyone. We are here to present our final-year project titled Smart Classroom Automation using IoT and Computer Vision.
Our team members are — myself, Siddhi Nalawade, Shravani Sarvi, Bhakti Potdar, and Srushti Waydande.
Our guide for this project is [faculty name].
Let’s walk you through how we combined AI, IoT, and automation to make classrooms truly intelligent.”




Slide 2 – Introduction

Shravani:
“Classrooms often have lights and fans left on even when no one is present.
Our system aims to solve this by detecting human presence using a camera and then automatically switching off electrical devices when the room is empty.
We also provide a custom web application so users can manually control devices from anywhere.
It’s an energy-efficient and user-friendly system built for smart institutions.”




Slide 3 – Problem Statement

Bhakti:
“Most institutions rely on manual switching, which leads to wastage of energy and higher electricity bills.
Even motion sensors often fail to detect stationary humans, so they’re not reliable in classrooms.
We wanted a system that is accurate, automatic, and intelligent — that’s why we used computer vision instead of basic sensors.”




Slide 4 – Objective / Aim

Srushti:
“Our main objective is to create an automated classroom environment that can:

  1. Detect human presence through a camera feed.
  2. Control fans and lights through IoT.
  3. Offer a manual control option through a web or mobile app.

This ensures both energy saving and user flexibility.”



Slide 5 – Need / Significance

Siddhi:
“This project supports the idea of a sustainable campus.
By automating device control, we minimize energy waste and reduce human effort.
It also provides real-time monitoring and data collection that can later help the management analyze energy usage patterns across classrooms.”




Slide 6 – System Architecture

Shravani:
“Here’s how our system works end-to-end.
We use a camera connected to a processing unit that runs an OpenCV-based human detection model.
When no person is detected, the backend sends a signal to the ESP8266 microcontroller through Wi-Fi.
The ESP then switches off the connected appliances via relay modules.
Communication between the app, backend, and hardware happens over standard HTTP APIs.”

Bhakti (adds):
“The ML model continuously checks frames from the live camera feed. It’s lightweight enough to run locally, so it doesn’t need a powerful GPU.”




Slide 7 – Use Case Scenario

Bhakti:
“Let’s consider a real classroom.
When students or teachers are present, our camera detects them — all lights and fans remain ON.
When the classroom becomes empty for a few minutes, the system confirms no human detection and sends a command to turn OFF all appliances.
If a teacher wants to use the room for extra time, they can log into the web app and manually turn devices ON.”

Siddhi:
“The app also provides an admin login and shows current device status, so users always have control.”




Slide 8 – Advantages

Srushti:
“Our system provides multiple benefits:

  • It saves significant energy.
  • Reduces manual intervention.
  • Provides a live control interface.
  • Is scalable for other rooms and buildings.
  • Works with affordable components like ESP8266 and relays.”

Shravani:
“And the best part — it’s wireless and Internet-enabled, so it can be managed remotely.”




Slide 9 – Market Scope & Future Enhancements

Siddhi:
“The system can easily be scaled to a Smart Campus Solution and even used in offices or hostels.
We’re also planning to extend it with features like:

  • Face recognition for attendance marking.
  • Voice assistant integration for control.
  • Cloud data analytics to predict usage patterns.
  • Integration with solar power for sustainable automation.”



Slide 10 – Conclusion

All (in turns):

Bhakti:
“To conclude, our project shows how machine learning can make real-world spaces intelligent.”

Shravani:
“We connected IoT with AI to create a system that conserves energy automatically.”

Siddhi:
“Our web interface bridges the gap between users and hardware, making control simple and reliable.”

Srushti:
“This project demonstrates how electronics, software, and AI can work together for sustainability and smart living.”

All together:
“Thank you for your time!”




💡 Presentation Flow Summary (for rehearsal)

| Slide | Speaker | Focus |
|---|---|---|
| 1 | Siddhi | Intro & team intro |
| 2 | Shravani | Problem context |
| 3 | Bhakti | Problem details & ML motivation |
| 4 | Srushti | Objective |
| 5 | Siddhi | Significance |
| 6 | Shravani + Bhakti | Architecture |
| 7 | Bhakti + Siddhi | Use case |
| 8 | Srushti + Shravani | Advantages |
| 9 | Siddhi | Market & Future |
| 10 | All | Conclusion |




A Human Detection–Based Energy Optimization System




1. Concept Overview

This project is an intelligent automation system that monitors classrooms in real time using computer vision and controls electrical appliances (lights, fans, etc.) through IoT hardware (ESP8266).
When no human is detected for a defined time, the system automatically switches off all connected devices.

It also includes a custom-built web app where authorized users can:

  • Log in and control appliances manually,
  • Override automatic control if needed,
  • Monitor live status and logs,
  • Access the system remotely through the Internet.

Thus, the system integrates three major technologies:

  1. Computer Vision (OpenCV/ML) for intelligent detection.
  2. IoT (ESP8266 + Relays) for real-world device control.
  3. Web Application for centralized human interaction and monitoring.



2. Problem It Solves

Traditional classrooms rely on human action to turn off fans and lights. In practice, they’re often left ON after classes — resulting in:

  • Energy wastage (up to 30–40% daily in some institutions).
  • Higher maintenance and electricity costs.
  • Lack of accountability for energy use.

Motion sensors exist, but they only detect movement — not presence. So, if a teacher stands still or writes on the board quietly, sensors may falsely switch lights off.

Our system, in contrast, uses camera-based ML, which sees humans — even when stationary — and makes intelligent decisions based on actual presence, not motion.




3. System Architecture

Imagine the project as a chain of five interconnected modules:



a) Camera Module

  • Mounted in the classroom to capture real-time video feed.
  • Connected to a local processing device (PC, Raspberry Pi, or mini server).
  • Provides continuous visual input to the ML model.



b) Computer Vision & ML Processing

This module acts as the project’s “intelligent brain”:

  • It runs the OpenCV-based human detection model on each frame from the live camera feed.
  • It draws bounding boxes around detected people and reports a presence/absence status to the backend every few seconds.
  • It is lightweight enough to run locally on the processing device (PC, Raspberry Pi, or mini server) without a dedicated GPU.
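The article’s stack (Section 6) lists OpenCV with TensorFlow Lite or YOLOv5 as options; as a minimal sketch, OpenCV’s built-in HOG pedestrian detector is the lightest-weight choice and needs no GPU. The function name `frame_has_person` is illustrative:

```python
# Minimal person-detection sketch using OpenCV's built-in HOG detector.
# The project's stack also lists TensorFlow Lite / YOLOv5 as alternatives;
# this is just the lightest option that runs without a GPU.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def frame_has_person(frame) -> bool:
    """Return True if at least one person is detected in the frame."""
    small = cv2.resize(frame, (640, 360))   # downscale for speed
    boxes, _weights = hog.detectMultiScale(small, winStride=(8, 8))
    return len(boxes) > 0

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)               # classroom camera
    ok, frame = cap.read()
    if ok:
        print("person present:", frame_has_person(frame))
    cap.release()
```

A real deployment would likely prefer a stronger detector such as YOLOv5 for seated or partially occluded students, since HOG is tuned for upright pedestrians.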



c) Backend / Control Logic (Server)

  • Implemented using Flask or FastAPI.
  • Receives data from the ML module every few seconds.
  • Communicates with the ESP8266 via HTTP or MQTT protocol.
  • Maintains user authentication, logging, and state control (ON/OFF).
  • Stores data in a small database (SQLite or PostgreSQL).

It decides what command to send based on two inputs (see the sketch after this list):

  1. ML detection status.
  2. Manual override flag from the web app.
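As a minimal sketch of that two-input decision, assuming a Flask backend with in-memory state and illustrative route names (`/detection`, `/override`) that the article does not specify:

```python
# Hypothetical sketch of the backend decision logic described above.
# Route names, state handling, and the ESP address are illustrative.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
state = {"manual_override": False, "devices_on": True}
ESP_URL = "http://192.168.1.50/device/update-status"   # assumed ESP8266 address

@app.route("/detection", methods=["POST"])
def detection():
    """Receive the ML module's result and decide what to send to the ESP8266."""
    human_present = request.get_json()["human_present"]
    if state["manual_override"]:
        return jsonify(action="none", reason="manual mode")   # input 2 wins
    desired = "on" if human_present else "off"                # input 1
    if (desired == "on") != state["devices_on"]:              # only act on change
        requests.post(ESP_URL, json={"device_id": "classroom_101",
                                     "fan": desired, "light": desired})
        state["devices_on"] = desired == "on"
    return jsonify(action=desired)

@app.route("/override", methods=["POST"])
def override():
    """Web-app toggle for manual mode."""
    state["manual_override"] = request.get_json()["enabled"]
    return jsonify(manual_override=state["manual_override"])
```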



d) IoT Hardware Module (ESP8266 + Relays)

  • The ESP8266 (NodeMCU) connects to the classroom Wi-Fi and listens for commands from the backend.
  • Incoming commands drive the GPIO pins wired to the relay module.
  • The relays physically connect or disconnect the AC circuits of the fans and lights.
  • After each switch, the ESP reports its new status back to the backend.



e) Web Application / Dashboard

  • Lets authorized users log in and control appliances manually.
  • Can override the automatic control when needed.
  • Displays live device status and activity logs.
  • Is accessible remotely over the Internet.




4. Detailed Working Process (Step-by-Step)

Let’s trace what happens during a normal classroom day:



Step 1: System Initialization

  • The camera and ML module start automatically when the device boots.
  • The ESP8266 connects to Wi-Fi and registers itself with the backend (via a unique device ID).
  • The web server starts listening for incoming API requests.
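The article does not show the registration API, but a minimal Flask sketch of it could look like this (the route name and fields are assumptions):

```python
# Hypothetical device-registration endpoint; the article only says the ESP8266
# "registers itself with the backend via a unique device ID".
from datetime import datetime
from flask import Flask, request, jsonify

app = Flask(__name__)
registered_devices = {}   # in production this would live in SQLite/PostgreSQL

@app.route("/device/register", methods=["POST"])
def register_device():
    data = request.get_json()
    registered_devices[data["device_id"]] = {
        "ip": request.remote_addr,                  # where to send commands later
        "registered_at": datetime.now().isoformat(),
    }
    return jsonify(status="registered")
```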



Step 2: Human Detection

  • The camera captures continuous video frames.
  • Each frame is passed through the ML model.
  • The model detects bounding boxes around humans.
  • If humans are found, a counter resets (indicating presence).
  • If no human is found for a set duration (say, 2 minutes), a “No Human” state is triggered.



Step 3: Decision Logic

  • The backend receives the detection result.
  • It checks whether the user has manually disabled automation (manual mode).
  • If not, the backend automatically generates a control signal:
    • Human detected → status = ON
    • No human detected → status = OFF



Step 4: Command Transmission

  • The control command (ON or OFF) is sent to the ESP8266 via an HTTP POST request or MQTT topic message.
  • Example API endpoint:

```
POST /device/update-status
Payload: { "device_id": "classroom_101", "fan": "off", "light": "off" }
```
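From the backend, that request is a few lines with the `requests` library; the ESP8266’s IP address below is a placeholder:

```python
# Sending the Step 4 command from the backend; the ESP8266's address is assumed.
import requests

resp = requests.post(
    "http://192.168.1.50/device/update-status",
    json={"device_id": "classroom_101", "fan": "off", "light": "off"},
    timeout=5,   # fail fast if the ESP is unreachable
)
print(resp.status_code, resp.text)
```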



Step 5: Hardware Control

  • The ESP8266 receives the command and triggers the GPIO pins connected to the relay board.
  • Relays physically disconnect or connect the circuit of fans/lights.
  • The change happens instantly (usually under 1 second).
  • The ESP then sends back a response like:

```
{ "status": "success", "timestamp": "2025-10-10 09:42:00" }
```



Step 6: User Interface (App)

  • The web app shows the live ON/OFF status of every connected device.
  • Authorized users (admin login) can toggle devices manually from anywhere.
  • A manual-override flag tells the backend to pause automation until it is cleared.



Step 7: Safety and Redundancy

  • The system includes manual wall switches for emergency control.
  • In case of Internet disconnection, the last known command is maintained locally until reconnection.
  • The backend ensures command acknowledgment before marking an action as complete.
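The acknowledgment requirement is easy to sketch: the backend retries the command until the ESP confirms success, and only then marks the action complete. The retry count and backoff below are illustrative:

```python
# Illustrative send-with-acknowledgment helper for the backend (Step 7):
# the action is only logged as complete once the ESP8266 confirms it.
import time
import requests

def send_with_ack(url: str, payload: dict, retries: int = 3) -> bool:
    for attempt in range(retries):
        try:
            resp = requests.post(url, json=payload, timeout=5)
            if resp.ok and resp.json().get("status") == "success":
                return True                 # acknowledged: safe to log
        except requests.RequestException:
            pass                            # ESP unreachable; retry below
        time.sleep(2 ** attempt)            # simple exponential backoff
    return False                            # caller keeps state unchanged
```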



5. Hardware Components

| Component | Function | Notes |
|---|---|---|
| ESP8266 (NodeMCU) | Wi-Fi microcontroller | Handles device switching |
| Relay Module (2- or 4-channel) | Controls AC power to appliances | Opto-isolated preferred |
| Camera (USB/Webcam) | Captures live video feed | Mounted at corner for wide view |
| Power Supply (5V DC) | Powers ESP and relays | Use regulated adapter |
| Lights & Fans | Load devices | Controlled appliances |

Optional Add-ons:

  • DHT11 Sensor → for temperature/humidity display in app.
  • Buzzer → for alerts when system toggles devices.



6. Software Stack

| Layer | Technology Used | Purpose |
|---|---|---|
| Frontend (Web App) | React / HTML-CSS-JS / Bootstrap | User dashboard, controls |
| Backend API | Flask / FastAPI | Device control, login, logic |
| ML/Detection | OpenCV, TensorFlow Lite, or YOLOv5 | Detect human presence |
| IoT Communication | HTTP / MQTT | Control and feedback |
| Database | SQLite / PostgreSQL | Logs and user accounts |
| Microcontroller Code | Arduino C++ (for ESP8266) | Relay control, Wi-Fi handling |



7. Communication Flow Diagram (Conceptual)

Camera → ML Model → Flask API → ESP8266 → Relays → Devices

And in reverse:
ESP8266 → Flask API → Database → Web App Dashboard




8. Algorithm Outline (Simplified Pseudocode)
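The pseudocode itself appears to have been lost in formatting; below is a minimal Python-style reconstruction of the main loop from the steps in Section 4. `camera`, `detect_humans`, `send_command`, and `manual_override` stand in for the real modules:

```python
# Python-style reconstruction of the missing pseudocode; detect_humans() and
# send_command() stand in for the project's ML and IoT layers.
import time

NO_HUMAN_TIMEOUT = 120      # "say, 2 minutes" (Step 2)

def main_loop(camera, detect_humans, send_command, manual_override=lambda: False):
    last_seen = time.time()
    devices_on = True
    while True:
        ok, frame = camera.read()
        if ok and detect_humans(frame):          # presence: reset the counter
            last_seen = time.time()
            if not devices_on and not manual_override():
                send_command("on")
                devices_on = True
        elif time.time() - last_seen > NO_HUMAN_TIMEOUT:
            if devices_on and not manual_override():
                send_command("off")              # empty long enough: power down
                devices_on = False
        time.sleep(1)                            # poll roughly once per second
```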

The ESP8266 code then simply listens for incoming HTTP requests and toggles the GPIO pins accordingly.




9. Real-World Performance & Benefits

| Parameter | Traditional | Our System |
|---|---|---|
| Energy Wastage | High | Almost eliminated |
| User Effort | Manual | Automatic |
| Accuracy | Motion-based | Vision-based |
| Cost | Moderate | Low (DIY-friendly) |
| Scalability | Limited | Highly scalable |
| Control | Local | Remote via App |

Estimated savings: Up to 30% reduction in electricity bills for a 10-room setup.
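As a back-of-envelope check on that figure (every number below is an assumed illustration, not measured data):

```python
# Hypothetical back-of-envelope estimate; every number here is an assumption.
rooms = 10
load_per_room_w = 4 * 75 + 8 * 40        # 4 fans @ 75 W + 8 tube lights @ 40 W
idle_hours_per_day = 3                   # time devices stay ON in empty rooms
tariff_per_kwh = 8.0                     # in local currency units

wasted_kwh_per_day = rooms * load_per_room_w * idle_hours_per_day / 1000
print(f"avoidable waste: {wasted_kwh_per_day:.1f} kWh/day, "
      f"~{wasted_kwh_per_day * tariff_per_kwh * 30:.0f} per month")
# 620 W/room * 3 h * 10 rooms = 18.6 kWh/day avoided if automation works
```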




10. Future Enhancements

  • Face recognition → auto attendance tracking.
  • Voice assistants → Alexa or Google integration.
  • Cloud dashboard → energy analytics across buildings.
  • Solar power integration → fully green classrooms.
  • AI prediction → predict occupancy patterns based on schedule data.



11. Conclusion

This system merges software intelligence (AI) with hardware automation (IoT) to create a product that is sustainable, efficient, and scalable.
It transforms a simple classroom into a self-managing smart environment, aligning with modern smart campus goals.

The result is a complete ecosystem that:

  • Thinks (ML model),
  • Acts (ESP hardware),
  • Listens (App commands), and
  • Learns (data analytics).

This fusion of OpenCV, ESP8266, and Flask creates a bridge between the digital and physical world — an ideal demonstration of engineering synergy.



