Abdul Rauf
Artificial Intelligence
I finally completed my final project for CS50SQL! Initially, I thought of creating a database for Spotify or Apple Music, as they suggested these ideas. However, while playing some football, I got an idea: why not create a database for a project I was already working on called MDP (Multiple Disease Predictor)? So I started working on it on September 4th and finally completed it today. The requirements were to fill out a form and create a short video no longer than 3 minutes. Anyway, I don't want to explain everything here (it would be a long message 🙂), but if you're interested, check out the repo. The link is below!
Repo: https://lnkd.in/eXuipaTP
#CS50SQL #MultipleDiseasePredictor #DatabaseProject
Transcript
Hi everyone, I'm Rauf, and this is my final project for CS50SQL: the MDP database, or Multiple Disease Predictor database. MDP is an ML-based FastAPI application I'm working on, so I thought, why not create a database for it rather than for Spotify or another platform that already has its own?

Let's start with the ER diagram, or entity relationship diagram. In the ER diagram you can see a one-to-many relationship from the user table to the info table, a one-to-many relationship from the user table to the prediction table, and a one-to-many relationship from the user table to the history table. The info table holds information about each user, the prediction table holds the predictions made for each user, and the history table holds each user's history and related details.

Next, let's look at DESIGN.md. In DESIGN.md I describe the design of the database: the scope, the functional requirements, the representation (the entities and the relationships between tables), and the optimizations. I used indexes to make the queries fast and efficient.

After that, let's look at schema.sql. It contains the code that creates the tables shown in the ER diagram, defines the primary and foreign keys that establish the relationships, and, at the end, creates indexes on the tables for optimization. You can see all of the code here: first the tables, and then, highlighted at the end, the indexes.

Finally, let's look at queries.sql. In queries.sql I wrote some basic queries a user can run to retrieve information from the database. There are ten of them, and each answers a specific question, such as retrieving the information for the user with a given ID. That's all. Thank you for watching!
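As a rough illustration of the structure described in the transcript, here is a condensed sketch of what such a schema might look like, written as a Python sqlite3 script. The table layout follows the ER diagram above, but the column names are guesses for illustration only; the actual definitions live in the repo's schema.sql.

```python
import sqlite3

conn = sqlite3.connect("mdp.db")
conn.executescript("""
-- Illustrative sketch only; real column names live in schema.sql in the repo.
-- One user has many info rows, predictions, and history rows
-- (the one-to-many relationships from the ER diagram).
CREATE TABLE IF NOT EXISTS users (
    id       INTEGER PRIMARY KEY AUTOINCREMENT,
    username TEXT NOT NULL UNIQUE
);

CREATE TABLE IF NOT EXISTS info (
    id      INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER NOT NULL REFERENCES users(id),
    age     INTEGER,
    gender  TEXT
);

CREATE TABLE IF NOT EXISTS predictions (
    id      INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER NOT NULL REFERENCES users(id),
    disease TEXT NOT NULL,
    result  TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS history (
    id           INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id      INTEGER NOT NULL REFERENCES users(id),
    requested_at TEXT DEFAULT CURRENT_TIMESTAMP
);

-- Indexes on the foreign keys to speed up the lookup queries.
CREATE INDEX IF NOT EXISTS idx_info_user        ON info(user_id);
CREATE INDEX IF NOT EXISTS idx_predictions_user ON predictions(user_id);
CREATE INDEX IF NOT EXISTS idx_history_user     ON history(user_id);
""")

# The kind of question queries.sql answers: the info for a given user ID.
print(conn.execute("SELECT * FROM info WHERE user_id = ?", (1,)).fetchall())
conn.close()
```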
Noman Ishfaq
Data Science | Python Lover | Turning Data into Impactful Insights | ML Enthusiast
Good work. Hope to learn more from your projects in the future too. Keep them coming.
More Relevant Posts
-
NITHARSHANA FATHIMA
Mathematics Teacher at Singaram Pillai Girls Higher Secondary School
Day 23 of #100DaysofNoCode 💯 Today I learned about Notion databases and how to build a blog using @NotionHQ & @feather_blogs. Here's my blog 👉 https://lnkd.in/gkkGPhen
-
Akshay Thengne
Me, trying to deploy a simple data pipeline: "Everything's ready, just need to push the button..."
Dependency gremlins: "Hold on there, buddy. Seems like version X.Y.Z of library A requires version W.V.U of library B, which conflicts with version M.N.O you have installed for library C..."
Me: (╯°□°)╯︵ ┻━┻ 🤦♂️
#dataengineering #dependencies #softwaredevelopment #humor #relatable #LinkedInLaughs 🤖💾🔗
-
Dominika Kaźmierczak
Frontend Developer at Enabler
I've been using Prisma for one of our clients for the past year, not necessarily understanding what's happening "behind the scenes". As we wanted to improve performance, we started using Knex, an SQL query builder, and Bun. The fun began when I needed to convert the "easily written" Prisma queries to Knex. It then became quite important to know SQL because, after all, it's the foundation. When you want to test whether the code you wrote is correct and you're fetching the right things, the best way is to test the SQL it generates. So, I took and completed an SQL course from Codecademy. Cheers! 🙌
-
SQLMaestros
[New Free #SQLServer Video Alert] Parsing SP_Server_Diagnostics Output (by Amit R S Bansal).
Watch the full video on SQLMaestros: https://bit.ly/sqlmaestros
Please share.
-
LangChain
🕸️ New Docs: Querying Graph DBs 🕸️
Graph DBs are great for capturing many real-world relationships but can be cumbersome to interact with. The ability of LLMs to extract relationships and write complex structured queries is making it practical and valuable to use graph DBs in a wide range of applications.
Our favorite graphista Tomaz Bratanic has contributed in-depth docs on using LangChain to build graph DB querying systems. These include:
- A quickstart
- How to handle high-cardinality properties
- How to build safer and more robust systems with narrowly scoped tools
- General prompting strategies
- Specific graph DB integrations
Give it a read: https://lnkd.in/gEG55ezA
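For a feel of what such a graph-querying chain looks like, here is a minimal sketch assuming a local Neo4j instance and the langchain-community / langchain-openai packages. The connection details, model choice, and example question are placeholders, exact import paths vary between LangChain versions, and newer releases may require an explicit opt-in before executing generated queries.

```python
from langchain_community.graphs import Neo4jGraph
from langchain_openai import ChatOpenAI
from langchain.chains import GraphCypherQAChain

# Connect to a graph DB (placeholder credentials for a local Neo4j instance).
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")

# The chain asks the LLM to write a Cypher query from the graph schema,
# runs it against the database, and summarizes the returned rows as an answer.
chain = GraphCypherQAChain.from_llm(
    llm=ChatOpenAI(model="gpt-4o", temperature=0),
    graph=graph,
    verbose=True,
)

result = chain.invoke({"query": "Which actors appeared in the most movies?"})
print(result["result"])
```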
-
Ved Asole
Software Engineer @HCLTech | Full Stack Java Developer | Mastering Java Ecosystem | Troubleshooting Whiz | I empower Employers to Boost Performance through Java, Spring Framework, and Microservices.
💡 DSA Day 8: Understanding Time and Space Complexities in 1D Arrays! 💡
Time and space complexity analysis is crucial for assessing the efficiency and memory usage of algorithms and data structures. Let's delve into the complexities of various operations on 1D arrays:

Creating:
Time Complexity: O(1) for allocating a fixed-size block (O(n) if every element is initialized).
Space Complexity: O(n). The space required is proportional to the number of elements.

Accessing Elements:
Time Complexity: O(1). Accessing an element by index is a direct operation.
Space Complexity: O(1). No additional space is required.

Insertion at End:
Time Complexity: O(1). Inserting at the end (while capacity remains) is a direct operation.
Space Complexity: O(1). No additional space is required.

Insertion at Beginning or Middle:
Time Complexity: O(n). Every element after the insertion point must be shifted.
Space Complexity: O(1) when shifting in place (O(n) if the array must be copied into a larger one).

Traversing:
Time Complexity: O(n). Every element is visited once.
Space Complexity: O(1). No additional space is required.

Linear Searching:
Time Complexity: O(n). An unsorted array must be checked element by element.
Space Complexity: O(1). No additional space is required.

Deleting:
Time Complexity: O(n) in general, since subsequent elements must be shifted; O(1) when deleting from the end.
Space Complexity: O(1). No additional space is required.

Understanding the time and space complexities of these operations is crucial for designing efficient algorithms and optimizing performance.
Check out my GitHub repository for more Data Structures and Algorithms implementations and practice problems: https://lnkd.in/g5ZbAkrS
Stay tuned for DSA Day 9, where we'll dive into multi-dimensional (2D) arrays and their complexities! 🚀💻
#DataStructures #Arrays #TimeComplexity #SpaceComplexity #TechLearning #DSA #DSADay8 #Java #DS
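To make the O(n) insertion cost concrete, here is a small illustrative sketch (in Python rather than the series' Java, purely for brevity; `insert_at` and the workload sizes are hypothetical, not from the linked repo):

```python
import timeit

def insert_at(arr, index, value):
    """Insert into an array-backed list by shifting every element after `index`."""
    arr.append(None)                # make room at the end (amortized O(1))
    for i in range(len(arr) - 1, index, -1):
        arr[i] = arr[i - 1]         # shift each later element one slot right: O(n)
    arr[index] = value

data = list(range(100_000))
# Appending touches one slot; inserting at the front shifts every element.
print("append at end :", timeit.timeit(lambda: data.append(0), number=1000))
print("insert at front:", timeit.timeit(lambda: insert_at(data, 0, 0), number=1000))
```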
-
Stefano Fago
Software Solutions Architect R&D
https://lnkd.in/dFzuUY3x
<< ...Ingestr is a command-line application that allows you to ingest data from any source into any destination using simple command-line flags, no code necessary... >>
-
Bryon Robidoux
Assistant Vice President Product Development
Yesterday, Andrew Chan, IFRI Certified, talked about the proof of concept for an actuarial Excel Lambda library. The current library, which I created (https://lnkd.in/gDYvAdMt), just wraps a call to get mortality tables from https://mort.soa.org. At the other end of the spectrum, we could wrap the Fed's API (https://lnkd.in/gtJDrBCK) in similar Lambda functions so you can get market data. This may be overkill because the FRED Excel plugin is quite good, which you can find at https://lnkd.in/g9skdFcT. My hope is that we can now build reusable, version-controlled Excel libraries that allow actuaries to quickly build calculations that are easy to read and understandable to external parties, especially IT developers. Furthermore, Lambda calculus leads to purity. Purity means a function in your program works like a function in math: a particular input will always give the same result, which is not true in imperative paradigms. Purity makes it very easy to massively parallelize calculations, which is why Hadoop used the same concept. Now Excel can be fun and interesting! What if we could embed Haskell (https://www.haskell.org/) in Excel formulas? That would be the day!
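To make the purity point concrete, here is a minimal sketch (Python standing in for Excel Lambda or Haskell; the function names and numbers are illustrative only, not from the libraries linked above):

```python
import random

# Pure: the result depends only on the inputs, so calls can be cached,
# reordered, or run in parallel safely.
def annuity_factor(rate: float, periods: int) -> float:
    return sum(1.0 / (1.0 + rate) ** t for t in range(1, periods + 1))

# Impure: the result also depends on hidden state (a random shock),
# so the same inputs can return different values on each call.
def noisy_annuity_factor(rate: float, periods: int) -> float:
    shock = random.gauss(0.0, 0.001)
    return annuity_factor(rate + shock, periods)

print(annuity_factor(0.03, 10) == annuity_factor(0.03, 10))              # always True
print(noisy_annuity_factor(0.03, 10) == noisy_annuity_factor(0.03, 10))  # almost surely False
```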
-
Stefano Fago
Software Solutions Architect R&D
https://lnkd.in/dAXn8cA7
<< ...Reladiff is a high-performance tool and library designed for diffing large datasets across databases. By executing the diff calculation within the database itself, Reladiff minimizes data transfer and achieves optimal performance... >>
-
DataPlatformGeeks
[#SQLServer Video Archives] Removing the SORT Operator (by Amit R S Bansal, SQLMaestros).
Watch the full video: https://lnkd.in/gNgbghkB
More tutorials here: https://lnkd.in/eGV-TvyH
Share.
#SQLServerWithAmitBansal