
# 3D Model Binary Vision Project

Link To Live Project : https://flapjacks.goodx.co.za/


Team Website: https://vanillav.github.io/FlapJacks.github.io/

A mobile web application that allows a user to scan a physical object and produce a 3-dimensional model that can be viewed from any direction. The user will also have the option to build the model from moving-camera sub-sections of video; these videos serve as the input clips for Structure-from-Motion (SfM) reconstruction. Once the model has been built, the user must be able to store both the stereolithography (STL) geometry and the surface information (such as colouring). The user can then select a stored model and generate a 3D render on a web page, where they will be able to rotate the model.

Demo Downloads:

Demo 1: https://drive.google.com/file/d/1urvzRHTTUYAgFXoPAeqhKeeuAuu2Y3_j/view?usp=sharing

Demo 2: https://drive.google.com/file/d/1a6tvSa4dhbSURALP31Q9EbFMByTVWWvP/view?usp=sharing

Demo 3: https://drive.google.com/file/d/1VfmeiANV4ApCIPXjxeQu1LsL8j8on6E-/view?usp=sharing

Demo 4: https://drive.google.com/file/d/1gvdl9_cOCZDT10tN7zqtIOTMi15_MZhO/view?usp=sharing


Documentation Downloads:

SR Specification: https://github.com/COS301-SE-2020/3D-model-Binary-Vision/blob/master/Documentation/System%20Requirements%20Specification.pdf

User Manual : https://github.com/COS301-SE-2020/3D-model-Binary-Vision/blob/master/Documentation/User%20Manual.pdf

Coding Standards : https://github.com/COS301-SE-2020/3D-model-Binary-Vision/blob/master/Documentation/Coding%20Standards%20Document.pdf

Technical Installation Manual : https://github.com/COS301-SE-2020/3D-model-Binary-Vision/blob/master/Documentation/User%20Manual.pdf

Testing Policy: https://github.com/COS301-SE-2020/3D-model-Binary-Vision/blob/master/Documentation/Test%20Policy.pdf


Project Management:

https://app.clubhouse.io/flapjacks301/stories/space/9/everything


Collaborators:

Rani Arraf

github.io: https://raniarraf.github.io/

Contributions:

Demo 1: Created the HTML and CSS for all pages, and included some JavaScript to help with the integration between the signup/login forms and the API. Created the forms for signup and login. Reviewed the final analysis of the system's functionality.

Demo 2: Extended the HTML and CSS to improve the UI and UX, and contributed front-end JavaScript for the API integration. Made sure that all data from the API is presented on the website. Furthermore, ensured that all data is applicable to modern-day dentists.

Demo 3: I updated the HTML and CSS for the user interface and contributed front-end JavaScript to the API integration (functionality that makes information easily accessible visually). I double-checked that all API data is presented on the website and remains applicable to modern-day dentists. For this demo I also gave the project an entirely new look, overhauling the previous demo's design so that the software is visually easier for both the doctor and the receptionist to use.

Demo 4: I worked on the front-end integration so that the system is visually appealing and easy to use. I also added new features such as the mobile platform visualisation, which allows users to view the program on mobile; for these I used HTML5, JavaScript, and Node.js. I added a profile-picture selection feature that lets users choose a new profile picture. I have helped with research on the integration testing and am currently working on fixing the quality testing.


Quinn du Piesanie

github.io: https://quinnman202.github.io/

Contributions:

Demo 1: Created a simple JavaScript WebGL program that renders vertices passed to it. For the documentation, I created a use-case diagram for retrieving patient information and editing patients.

Demo 2: Researched and implemented the first step of 3D point cloud generation (camera calibration). Created the deployment model diagram.

Demo 3: Implemented the SfM algorithm to generate a point cloud from given images.

Demo 4: Completed the SfM algorithm pipeline to render textured model files.
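At its core, the SfM pipeline described above recovers 3D points from 2D observations in multiple views. Purely as an illustration (this is not the repository's implementation, which also involves calibration, feature matching, and texturing), the basic triangulation step can be sketched as a ray-midpoint computation:

```javascript
// Illustrative sketch only: triangulate a 3D point from two camera rays
// using the midpoint of the closest points between the rays. A real SfM
// pipeline also performs calibration, feature matching, and refinement.

function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function add(a, b) { return [a[0] + b[0], a[1] + b[1], a[2] + b[2]]; }
function scale(a, s) { return [a[0] * s, a[1] * s, a[2] * s]; }
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// c1, c2: camera centres; d1, d2: unit ray directions toward the point.
function triangulateMidpoint(c1, d1, c2, d2) {
  const r = sub(c1, c2);
  const a = dot(d1, d1), b = dot(d1, d2), c = dot(d2, d2);
  const d = dot(d1, r), e = dot(d2, r);
  const denom = a * c - b * b;        // ~0 when the rays are parallel
  const t1 = (b * e - c * d) / denom; // parameter along ray 1
  const t2 = (a * e - b * d) / denom; // parameter along ray 2
  const p1 = add(c1, scale(d1, t1)); // closest point on ray 1
  const p2 = add(c2, scale(d2, t2)); // closest point on ray 2
  return scale(add(p1, p2), 0.5);    // midpoint of the closest points
}
```

With noise-free rays the two closest points coincide at the true 3D point; with real, noisy feature matches the midpoint gives a reasonable estimate.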


Jacobus Janse van Rensburg

github.io: https://jacobus1998.github.io/

Contributions:

Demo 1: Created the database and the Node.js API that connects the front end and back end, and was part of the process of integrating the two. For the documentation I wrote the introduction and the functional and quality requirements. I also set up the base document, which the team then changed and added to as necessary and as assigned to them.

Demo 2: Created API functions to handle requests to add patients, view all patients, view a single patient, send emails, and create and clear cookie data, and made sure all data returned belongs to the correct doctor. Wrote the database queries to fetch the information on the back end, and created the front-end API calls as well.

Demo 3: Developed API functions for the receptionist and the doctor to manage the booking system we built, as well as front-end JavaScript files with functions to improve our signup system, along with dynamic-page API calls and population of the information the API returns. Developed the dynamic booking page and the weekly planner page that shows all the doctor's bookings for the coming week.

Demo 4: Developed the fuzzy-logic algorithm that finds open booking slots for the receptionist. Created the QR code generator that gives a practice a unique page where patients can enter their information. Integrated the algorithm with the API and front end. Built the emailing system used to change passwords and to ensure that a head receptionist accepts a signup before the new user can access records.
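As a rough picture of what an open-slot search involves, the hypothetical `findOpenSlots` helper below enumerates free slots in a day's schedule and ranks them by closeness to a preferred time. It is a crisp simplification for illustration, not the repository's fuzzy-logic implementation:

```javascript
// Hypothetical sketch of an open-slot search. Times are expressed as
// minutes since midnight; bookings are [start, end) intervals.
// The crisp distance-to-preference ranking stands in for a fuzzy
// "closeness" membership function.

function findOpenSlots(bookings, dayStart, dayEnd, slotLen, preferred) {
  const busy = [...bookings].sort((x, y) => x[0] - y[0]);
  const open = [];
  for (let t = dayStart; t + slotLen <= dayEnd; t += slotLen) {
    // A candidate slot [t, t + slotLen) clashes if it overlaps any booking.
    const clashes = busy.some(([s, e]) => t < e && s < t + slotLen);
    if (!clashes) open.push(t);
  }
  // Rank free slots by distance from the preferred time.
  return open.sort((x, y) => Math.abs(x - preferred) - Math.abs(y - preferred));
}
```

For example, with bookings at 9:00–9:30 and 10:00–10:30 in a 9:00–11:00 day, 30-minute slots, and a preference near 10:20, the search returns 10:30 first, then 9:30.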


Steven Visser

github.io: https://vanillav.github.io/

Contributions:

Demo 1: Created and managed the user stories using the project management tools and set deadlines for work to be completed. Created the Use Case Diagram for the Media & Render Subsystem. Revised the System Requirements Specification Document and created Functional Requirements. Implemented Unit Testing for the Signup operations with the API integration.

Demo 2: Assisted in the research for the 3D model render from SfM footage. I integrated the code for the high-level render of the patient's model from an STL file, and set up the HTML page to take an STL file as input from the user. Updated the System Requirements Specification to meet the new demo requirements by adding new use cases and functional requirements and updating the old use-case diagrams. I also wrote the user manual for the system.

Demo 3: Helped to extend the API and implemented new functions. I wrote the Coding Standards document and the Technical Installation Manual, and updated the User Manual to match our system redesign. Went through the system, documented all bugs and fixes that needed to be made, and then fixed a lot of them too.

Demo 4: I kept the project management tool up to date and updated most of the documentation to match the latest system. Updated the booking system to remove the deletion of data and to change which bookings can be viewed. I implemented the consultation record page and the saving of consultations. I created the log system, which keeps track of all user interactions with the system. I set up the SSL certificate in the server configuration and changed the rendering page to render OBJ files instead of STL files.
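Switching the renderer from STL to OBJ means consuming a plain-text mesh format. As an illustration only (a renderer would normally rely on a loader library rather than this hypothetical `parseObj` helper), extracting geometry from OBJ text looks roughly like:

```javascript
// Illustrative sketch: extract vertex positions and face indices from
// Wavefront OBJ text. This only shows the format; it is not the
// project's rendering code.

function parseObj(text) {
  const vertices = []; // flat array of [x, y, z, x, y, z, ...]
  const faces = [];    // arrays of zero-based vertex indices
  for (const line of text.split("\n")) {
    const parts = line.trim().split(/\s+/);
    if (parts[0] === "v") {
      // "v x y z" — a vertex position.
      vertices.push(...parts.slice(1, 4).map(Number));
    } else if (parts[0] === "f") {
      // Each face entry may look like "i", "i/j", or "i/j/k"; OBJ
      // indices are one-based, so subtract 1.
      faces.push(parts.slice(1).map(p => parseInt(p.split("/")[0], 10) - 1));
    }
  }
  return { vertices, faces };
}
```

A single triangle, for instance, parses to nine vertex components and one face of indices `[0, 1, 2]`.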


Marcus Werren

github.io: https://marcuswerren.github.io/

Contributions:

Demo 1: I created the traceability matrix for the system documentation. I implemented the video capture functionality that allows users to send a live-recorded or pre-recorded video from the client's machine to the back end for processing. I also contributed to the unit testing of the system's API calls.

Demo 2: I was in control of unit testing and code integration for the project. I implemented new unit tests for the API, and linked Travis CI (the continuous integration tool we are using) to our GitHub repository, so that when a commit is pushed the repository is built and the unit tests run to verify that meaningful code was uploaded. I also edited the demo video.

Demo 3: I contributed to the new user interface by adding and updating HTML, CSS, and JavaScript. I also updated the video capture page, which now sends blob images to the back end instead of the full video. Lastly, I was in control of the unit testing: I updated all the tests in accordance with the new API calls, and updated the Travis CI configuration as well.

Demo 4: I did most of the testing for the project, finishing the integration and unit testing against the latest updates to the API. I also made a few updates to the front end.
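For reference, the Travis CI setup mentioned in Demo 2 typically boils down to a small configuration file for a Node.js project. The following is a generic sketch, not the repository's actual `.travis.yml`:

```yaml
# Generic sketch of a .travis.yml for a Node.js project; the
# repository's actual configuration may differ.
language: node_js
node_js:
  - "12"
install:
  - npm install
script:
  - npm test
```

With this in place, each push triggers a build that installs dependencies and runs the test suite, so failing commits are flagged automatically.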

