Update app.py
app.py
CHANGED
@@ -59,7 +59,7 @@ SARATH CHANDRA BANDREDDI
    - Face Recognition Attendance System: Created a one-shot face recognition model using FaceNet and MTCNN to manage attendance, with a unique feature allowing students to mark their attendance only once per day and only within the campus premises. This ensures strict attendance integrity and security.
    - Chatbot Integration: Built and integrated the AskVVIT chatbot to assist with college-related inquiries. Initially deployed with the Gemini Pro LLM and the Google API, the chatbot provided an interactive platform for students and staff. Because of rate limits (one response per minute), the model was later replaced with LLaMA 3.2:1B (LLaMA 3:latest was also evaluated), significantly improving response times.
    - Backend & Django: Developed Django templates using Jinja and integrated the frontend pages with the backend functionality. Created models for user registration and the attendance management system.
-
+
    This project not only enhanced resource management at the college but also introduced modern technologies such as face recognition and AI-driven chatbots, setting a foundation for future advancements in academic institution management systems.
    • Devised robust user authentication and two-factor (2FA) password authentication, enhancing system security and reliability.
    • Led the project team, developing comprehensive Django templates and seamlessly integrating custom chatbot functionalities.
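The "one-shot" FaceNet + MTCNN pipeline described in the hunk above is not part of this commit, so the sketch below is only an illustration of that approach, assuming the facenet-pytorch package; the function names and the distance threshold are hypothetical, not taken from the project.

# Illustrative sketch (not from this repo): one-shot face verification with
# facenet-pytorch. Install with: pip install facenet-pytorch pillow
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)                               # face detection + alignment
encoder = InceptionResnetV1(pretrained="vggface2").eval()   # FaceNet embedding network

def embed_face(image_path: str) -> torch.Tensor:
    # Detect the face and return its 512-dimensional FaceNet embedding.
    face = mtcnn(Image.open(image_path).convert("RGB"))
    if face is None:
        raise ValueError(f"No face detected in {image_path}")
    with torch.no_grad():
        return encoder(face.unsqueeze(0))[0]

def is_same_person(enrolled_photo: str, new_capture: str, threshold: float = 1.0) -> bool:
    # One-shot check: a single enrolled photo is compared against a fresh capture.
    # The threshold is a tunable assumption, not a value from the project.
    distance = (embed_face(enrolled_photo) - embed_face(new_capture)).norm().item()
    return distance < threshold

In the attendance flow described above, a successful match would additionally be accepted only once per day and only from within the campus premises.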
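The Django side mentioned in the same hunk (user registration and attendance models) is likewise outside this diff. Below is a minimal sketch of how the "only once per day" rule could be enforced at the model level, assuming a standard Django setup; the model and field names are hypothetical.

# Hypothetical models.py sketch: enforce one attendance mark per student per day.
from django.conf import settings
from django.db import models

class AttendanceRecord(models.Model):
    student = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    date = models.DateField(auto_now_add=True)         # stamped when the row is created
    marked_on_campus = models.BooleanField(default=False)

    class Meta:
        constraints = [
            # A second mark on the same calendar day violates this constraint.
            models.UniqueConstraint(fields=["student", "date"], name="one_mark_per_day"),
        ]

The campus-premises check would sit in the view (for example, an IP-range or geolocation test) before the record is saved.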
@@ -112,19 +112,19 @@ SARATH CHANDRA BANDREDDI

client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")

+
# Chatbot response function with integrated system message
def respond(
-
-
-
-
-
-    top_p=.95,
+    message,
+    history: list[tuple[str, str]],
+    max_tokens=1024,
+    temperature=0.5,
+    top_p=0.95,
):
    # System message defining assistant behavior
    system_message = {
        "role": "system",
-        "content": f"Act and chat as SARATH who is a professional fresher seeking a job and your name is SARATH."
+        "content": f"Act and chat as SARATH who is a professional fresher seeking a job and your name is SARATH."
                   f"Here is about you SARATH: data=```{data}```. You should answer questions based on this information only."
                   f'''Hire me or Contact me:
                   - LinkedIn:"https://www.linkedin.com/in/sarath-chandra-bandreddi-07393b1aa/"
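The hunk above shows only the head of respond() and its system message; the lines that assemble the full message list (roughly lines 131-146 of app.py) are not part of this diff. Presumably they mirror the commented-out template that this commit deletes further down, along the following lines; build_messages is a hypothetical helper name used only for this sketch.

# Sketch of the unshown assembly step, modelled on the removed template code.
def build_messages(system_message: dict, history: list[tuple[str, str]], message: str) -> list[dict]:
    messages = [system_message]
    for user_turn, bot_turn in history:
        if user_turn:
            messages.append({"role": "user", "content": user_turn})
        if bot_turn:
            messages.append({"role": "assistant", "content": bot_turn})
    messages.append({"role": "user", "content": message})   # the new user prompt goes last
    return messages

The resulting list is what the streaming loop in the next hunk passes to client.chat_completion().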
@@ -147,109 +147,62 @@ def respond(

    # Streaming the response from the API
    for message in client.chat_completion(
-
-
-
-
-
+        messages,
+        max_tokens=max_tokens,
+        stream=True,
+        temperature=temperature,
+        top_p=top_p,
    ):
        token = message.choices[0].delta.content
        response += token
        yield response

+chatInterfaceCSS = '''
+width: 20%;
+# height: auto;
+border-radius: 75px;
+box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
+padding: 10px;
+'''
+
+css = '''
+.contact-links {
+    font-size: 1rem; /* Slightly larger font */
+    text-align: center; /* Center-align the contact section */
+    color: #0073e6; /* Link color */
+}
+.contact-links a {
+    color: #0073e6; /* Ensure links are a nice blue */
+    text-decoration: none; /* Remove underline */
+    padding: 0.5rem; /* Add some padding around links */
+    display: inline-block;
+}
+.contact-links a:hover {
+    text-decoration: underline; /* Underline on hover */
+    color: red;
+}
+'''

# Gradio interface with additional sliders for control
-with gr.Blocks() as demo:
-    gr.Markdown('# Welcome to the
+with gr.Blocks(theme=gr.themes.Soft(font=[gr.themes.GoogleFont("Roboto Mono")])) as demo:
+    gr.Markdown('# Welcome to the DearHRSpeakWithMy2.0 🤖💬!')
    gr.Markdown(
        '''
-
-
+        DearHRSpeakWithMy2.0 is a smart, AI-powered chatbot designed to act as a virtual introduction tool for job candidates during HR interviews. The bot is equipped to present comprehensive details about the candidate's skills, projects, and experience in a personalized and professional manner.
+        Inspired by a real-world experience where the interviewer overlooked key aspects of the candidate's expertise in AI and Machine Learning (ML), this project aims to ensure that important, job-relevant information is effectively communicated. DearHRSpeakWithMy2.0 helps avoid situations where an interviewer might focus on areas that don't align with the candidate's strengths or goals.
        '''
    )
-
-    respond,
-    # additional_inputs=[
-    #     gr.Textbox(value="You are a friendly Chatbot.", label="System message"),
-    #     gr.Slider(minimum=1, maximum=2048, value=512, step=1, label="Max new tokens"),
-    #     gr.Slider(minimum=0.1, maximum=4.0, value=0.7, step=0.1, label="Temperature"),
-    #     gr.Slider(minimum=0.1, maximum=1.0, value=0.95, step=0.05, label="Top-p (nucleus sampling)"),
-    # ],
-    )
+    gr.ChatInterface(respond, css=chatInterfaceCSS)
    gr.Markdown(
        '''
-
-
-
-
-
+        <h3>Contact Me:</h3><br>
+        <a href="https://www.linkedin.com/in/sarath-chandra-bandreddi-07393b1aa/" target="_blank">LinkedIn</a> |
+        <a href="https://21bq1a4210.github.io/MyPortfolio-/" target="_blank">My Portfolio</a> |
+        <a href="mailto:[email protected]">Personal Email</a> |
+        <a href="mailto:[email protected]">College Email</a>
        '''
    )

-if __name__ == "__main__":
-    demo.launch()
-
-# import gradio as gr
-# from huggingface_hub import InferenceClient
-
-# """
-# For more information on `huggingface_hub` Inference API support, please check the docs: https://huggingface.co/docs/huggingface_hub/v0.22.2/en/guides/inference
-# """
-# client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")
-
-
-
-#     history: list[tuple[str, str]],
-#     system_message,
-#     max_tokens,
-#     temperature,
-#     top_p,
-# ):
-#     messages = [{"role": "system", "content": system_message}]
-
-#     for val in history:
-#         if val[0]:
-#             messages.append({"role": "user", "content": val[0]})
-#         if val[1]:
-#             messages.append({"role": "assistant", "content": val[1]})
-
-#     messages.append({"role": "user", "content": message})
-
-#     response = ""
-
-#     for message in client.chat_completion(
-#         messages,
-#         max_tokens=max_tokens,
-#         stream=True,
-#         temperature=temperature,
-#         top_p=top_p,
-#     ):
-#         token = message.choices[0].delta.content
-
-#         response += token
-#         yield response
-
-
-# """
-# For information on how to customize the ChatInterface, peruse the gradio docs: https://www.gradio.app/docs/chatinterface
-# """
-# demo = gr.ChatInterface(
-#     respond,
-#     additional_inputs=[
-#         gr.Textbox(value="You are a friendly Chatbot.", label="System message"),
-#         gr.Slider(minimum=1, maximum=2048, value=512, step=1, label="Max new tokens"),
-#         gr.Slider(minimum=0.1, maximum=4.0, value=0.7, step=0.1, label="Temperature"),
-#         gr.Slider(
-#             minimum=0.1,
-#             maximum=1.0,
-#             value=0.95,
-#             step=0.05,
-#             label="Top-p (nucleus sampling)",
-#         ),
-#     ],
-# )
-
-
-# if __name__ == "__main__":
-#     demo.launch()
+if __name__ == "__main__":
+    demo.launch(share=True)
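A possible follow-up to this commit: the deleted template wired max-tokens, temperature and top-p sliders into the chat UI through gr.ChatInterface's additional_inputs, and the new respond() still accepts those keyword arguments with defaults. Reattaching the sliders would look roughly like the sketch below, with slider ranges copied from the removed code and default values matching the new signature.

# Sketch only: re-exposing respond()'s tuning parameters in the Gradio UI.
import gradio as gr

chat = gr.ChatInterface(
    respond,  # the generator defined earlier in app.py
    additional_inputs=[
        gr.Slider(minimum=1, maximum=2048, value=1024, step=1, label="Max new tokens"),
        gr.Slider(minimum=0.1, maximum=4.0, value=0.5, step=0.1, label="Temperature"),
        gr.Slider(minimum=0.1, maximum=1.0, value=0.95, step=0.05, label="Top-p (nucleus sampling)"),
    ],
)

Additional inputs are passed to respond() positionally after message and history, so the slider order matches max_tokens, temperature, top_p.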