
CompSci Major Research Project

Zalosath

Hi all!

 

I'm in my final year of Computer Science and I'm looking for major project ideas!

Criteria:

- Needs to be a gap in research, i.e. there can be existing research papers on the topic, but I must expand on them in some way.

- Complex enough for an 8000 word report.

- A reasonable amount of existing research in the area (for the lit review).

 

A couple of ideas I had are as follows:
- A computer vision system for detecting a failing 3D print as it prints. This was declined by my project lead because "systems like this already exist and there's little to expand into".
- Some kind of maze generator with an optimized pathfinder to complete it (I didn't get the chance to flesh this one out before 1. realising it's not complex enough and 2. it being declined by the project lead).

 

Unfortunately, both have been declined by my project lead for various reasons. 

 

I love AI, algorithms, making games, 3D printing, computers, programming, talking with friends, and food!

My project lead says to avoid AI, though, since previous students have had trouble in this area.

 

So, do you have any ideas?

 

If not, do you have any daily activities that could be improved with software?

 

I greatly appreciate any and all ideas, I really need some inspiration here! Cheers!


I may be totally wrong, but my understanding is that with a lot of the less expensive 3D printers, the print just happens at one speed and feed rate. If you were able to use AI for more than just detecting a failed or failing print, and could create an algorithm/program to dynamically adjust the feed and print speed to account for more complex detail requiring finer precision, I think that would be very beneficial, assuming software that can already do that doesn't exist.

 

 

A bit ago (1-2 weeks?) there was a Short Circuit video for a 3D printer, and Dan had it just go ham on the print to see how it would turn out; it lost its centering from zooming across too quickly. If you had an AI program watching the print head and cross-referencing it against where it should be in relation to static objects of known location, such as the frame of the printer, then if the print head was not in the correct place after x number of location samples, it could send an interrupt to the printer to re-center the X and Y axes and recalibrate.
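
 

Roughly, the watcher could look something like this: a minimal Python/OpenCV sketch, assuming a fixed camera, a template image of the print head, and a serial link to the printer. The expected_position() callable is a made-up stand-in for wherever the G-code says the head should be at that moment.

```python
# Minimal sketch of the drift-detection idea (assumptions noted above).
import cv2
import numpy as np
import serial

printer = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)
head_template = cv2.imread("head_template.png", cv2.IMREAD_GRAYSCALE)

def detect_head(frame_gray):
    """Locate the print head in a frame via template matching."""
    result = cv2.matchTemplate(frame_gray, head_template, cv2.TM_CCOEFF_NORMED)
    _, confidence, _, top_left = cv2.minMaxLoc(result)
    return top_left if confidence > 0.8 else None

def monitor(camera, expected_position, tolerance_px=15, max_strikes=5):
    """Re-home X/Y if the head strays from its expected spot for too long."""
    strikes = 0
    while True:
        ok, frame = camera.read()
        if not ok:
            continue
        found = detect_head(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        expected = expected_position()  # hypothetical: derived from the G-code stream
        if found is not None and np.hypot(found[0] - expected[0],
                                          found[1] - expected[1]) > tolerance_px:
            strikes += 1
        else:
            strikes = 0
        if strikes >= max_strikes:       # persistent drift, not a one-off blip
            printer.write(b"M25\n")      # pause the print (Marlin SD pause)
            printer.write(b"G28 X Y\n")  # re-home the X and Y axes
            strikes = 0
```

Requiring several consecutive bad samples before intervening is deliberate: a single misdetection shouldn't pause a print.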

 

 

As for the 8000-word part, the best advice I can give you is to just talk. Talk and explain it like you would to someone like me who knows a little bit about it but not enough to be an expert. 8000 words comes up very quickly. I also have rapid-fire thought processes (ADHD superpower), so I frequently use speech-to-text when I get into a rhythm.

 

Just laying out a basic framework here of what the idea could be: 282 words. 

 

When I had to do a procedure write-up for microbiology and needed it to be "straightforward such that anyone at any experience level and background could enter the lab and perfectly replicate your experiment", it ended up at something like 268 steps and about 12 pages of instructions and background. So my second piece of advice: approach the framework as someone who understands the basics, and lay out the explanation so that (nearly) anyone could pick it up and follow what you are doing.


3 minutes ago, BiotechBen said:

I may be totally wrong, but my understanding is that with a lot of the less expensive 3D printers, the print just happens at one speed and feed rate. […]

Cool ideas! I'll have a look for that video and see if I can do some research into existing technologies for things like this, although I really have been strongly advised to avoid AI (even though I'd really like to do it!). I'll create more specific research questions from your suggestions and see what my project lead says. Thanks!

 

Thanks for the advice. I think I can probably fill the 8000 words with some kind of AI project, if I'm allowed to do one, and if not, well, 8000 is just a maximum; there is technically no minimum 🙂


30 minutes ago, Zalosath said:

8000 is just a maximum; there is technically no minimum 🙂

Even better. Of the many papers and presentations I've done through my schooling, I've never had an issue meeting the minimum while staying within the maximum, and that has netted me a lowest grade of 92%, and that was a presentation sophomore year of high school on a book I never actually finished reading.

 

34 minutes ago, Zalosath said:

Cool ideas! I'll have a look for that video and see if I can do some research into existing technologies for things like this, although I really have been strongly advised to avoid AI (even though I'd really like to do it!).

Machine learning is only going to become more and more important. If you were able to write a self-correcting printer algorithm, I feel like that would be something worth defending a thesis on.


1 hour ago, Zalosath said:

- A computer vision system for detecting a failing 3D print as it prints. […]
- Some kind of maze generator with an optimized pathfinder to complete it. […]

Yeah, I understand why those ideas would be rejected. The maze generator would be a very small project; there are already optimal algorithms for solving a maze. And there are already equivalents on the market for detecting 3D print failures (dating back at least 2 years). But if you are into doing something like that, may I propose auto-generation of a 3D environment/objects from a video feed?

 

The technology already "exists", but a large majority of it is stuck behind proprietary code or has imperfections; e.g. some of the modern systems require tons of photos and lots of compute time.

 

A general concept could be to attempt what Tesla's Autopilot is doing in the background with occupancy networks (except doing it on consumer hardware).

 

e.g. from the 1 hour 12 minute mark onward: creating a voxel system from a camera feed (along with estimated voxel trajectories).

 

I would imagine it's quite an open area of research, mixing neural networks and likely classical AI.
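
 

For a feel of what the core of such a system computes, here's a toy sketch. This is not Tesla's method, just the classical back-projection baseline: turn a depth map into an occupancy voxel grid, with the pinhole camera intrinsics (fx, fy, cx, cy) assumed known.

```python
# Toy sketch: depth map -> occupied voxel indices (assumptions noted above).
import numpy as np

def depth_to_voxels(depth_m, fx, fy, cx, cy, voxel_size=0.05):
    """Back-project an HxW depth map (in metres) into occupied voxel indices."""
    h, w = depth_m.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth_m
    x = (us - cx) * z / fx                             # pinhole back-projection
    y = (vs - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    points = points[points[:, 2] > 0]                  # drop invalid depths
    return {tuple(v) for v in np.floor(points / voxel_size).astype(int)}
```

The learned approaches replace both the depth estimate and the binning with a network, but the output structure (a sparse occupancy grid) is the same.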


12 minutes ago, BiotechBen said:

Machine learning is only going to become more and more important. If you were able to write a self-correcting printer algorithm, I feel like that would be something worth defending a thesis on.

You're right about this; I do wonder if I could convince the project lead to let me do something like that. I have a meeting with them tomorrow, so I'll bring it up. Cheers!

 

16 minutes ago, wanderingfool2 said:

[…] if you are into doing something like that, may I propose auto-generation of a 3D environment/objects from a video feed? […]

I love this idea! I doubt I'd be able to fully complete it, though; it's a large undertaking, especially given that I will be doing 2 other projects alongside this one.

Maybe I could simplify it in some way, but then again it's not really a gap in research at that point.

 

I have a couple of AI-related ideas now, appreciate it! Just in case I get a solid "don't do an AI project", do either of you have any ideas away from that area?


1 hour ago, Zalosath said:

I love this idea! I doubt I'd be able to fully complete it, though… […] Just in case I get a solid "don't do an AI project", do either of you have any ideas away from that area?

Yeah, it might be a large undertaking, although you might be able to find some open source stuff that gets you on your way, which would let you focus on the algorithms and other concepts to improve the system or do research into it.

 

I'm not really sure of other topics. To be honest, most things nowadays end up being research papers on AI (in the broadest terms).

 

I guess you could do other research, such as lighting a scene and trying to reduce the complexity of that while maintaining the fidelity of ray tracing, but I think things like that are already so advanced that a single person might not realistically be able to contribute much.

 

I'm not sure if you've watched Two Minute Papers on YouTube before. You might be able to get a few ideas from the papers presented there:

https://www.youtube.com/c/KárolyZsolnai/videos
 

While it might not actually be stuff that you could do, it could give you a few ideas on what you might want to explore in your research paper.


1 hour ago, Zalosath said:

[…] Just in case I get a solid "don't do an AI project", do either of you have any ideas away from that area?

Something that I've been wanting to learn to code for (it may be far too simple in your case) is environmental monitoring. I went to a local government environmental talk, and the point they were hammering home was that with drainage swales, the hardest part is ensuring that drainage happens at a sustainable rate so that the wetland stays habitable for its wildlife. The goal is water retention for 2-3 days, but they don't have the manpower or resources to monitor all 1000+ swales in the municipality and adjust the drainage iris size on the exit in real time for optimal draining. This is compounded by debris and silt blocking the entrance to the drain. If something like a Raspberry Pi or Arduino were able to report water level, environmental conditions, and flow rate inside the drain, it would be much easier to manage the swales.

 

My idea is that each station has a Raspberry Pi or Arduino with a camera, a few environmental sensors, and a sensor to measure water level. As the flow increases or decreases, the water level will drop or remain steady. Now this is the hard part: somehow the iris of the drain needs to be adjustable to manage flow rate (optimal is usually between 1-2 in diameter). This could use a small amount of AI/machine learning eventually to increase efficiency, but for now it would be a program with a few formulas whose results cross-reference a table that runs some if-then code. The part that might be an issue is reporting conditions remotely. I would likely have the station report maybe twice per day with a picture and sensor readings to a server or database.
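
 

A rough sketch of what that control/reporting loop might look like in Python. The sensor and actuator helpers (read_water_level_cm, read_flow_lpm, set_iris_mm, capture_photo) are hypothetical stand-ins for whatever hardware the station ends up using, and the server URL is a placeholder.

```python
# Sketch of the station loop (helper functions and URL are hypothetical).
import time
import requests

# The "table that runs some if-then code": water level -> iris opening.
IRIS_TABLE = [(0, 25), (20, 35), (40, 50)]  # (min water level cm, iris diameter mm)

def choose_iris_mm(level_cm):
    """Pick the largest opening whose level threshold has been reached."""
    opening = IRIS_TABLE[0][1]
    for threshold_cm, diameter_mm in IRIS_TABLE:
        if level_cm >= threshold_cm:
            opening = diameter_mm
    return opening

def run_station(report_every_s=12 * 60 * 60):  # roughly twice per day
    while True:
        level = read_water_level_cm()          # hypothetical sensor helper
        set_iris_mm(choose_iris_mm(level))     # hypothetical actuator helper
        requests.post(
            "https://swale-monitor.example/report",  # placeholder endpoint
            data={"level_cm": level, "flow_lpm": read_flow_lpm()},
            files={"photo": ("station.jpg", capture_photo())},
        )
        time.sleep(report_every_s)
```

Starting with a plain lookup table like this also gives you a baseline to measure any later ML-driven controller against.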


1 minute ago, wanderingfool2 said:

Yeah, it might be a large undertaking… […] I'm not sure if you've watched Two Minute Papers on YouTube before. You might be able to get a few ideas from the papers presented there.

I'll have to take a look around; I'm certain there'll be something that could help.

 

It has to be a practical project, i.e. we cannot just do a research paper, so that rules that out.

 

Yeah, like you say, projects like that are already so advanced that I'm not sure I could add anything to them. Cool idea though!

 

I haven't seen that channel before, but it looks very promising. I'll definitely have a look through those, thanks.

 

1 minute ago, BiotechBen said:

Something that I've been wanting to learn to code for (it may be far too simple in your case) is environmental monitoring. […]

Hm, that's interesting. I think the main trouble I'd have with a project like this is testing it; I suppose I could set up some kind of simulator, but I'm a programmer, not a plumber 😄

I do wonder if it could spark another idea, though. Something environmental could be cool, although basic soil monitoring systems are probably not too easy to expand upon.

 


13 hours ago, BiotechBen said:

If you had an AI program watching the print head and cross-referencing it against where it should be in relation to static objects of known location, such as the frame of the printer, then if the print head was not in the correct place after x number of location samples, it could send an interrupt to the printer to re-center the X and Y axes and recalibrate.

You don't need "AI" for this; in fact, you don't need a vision system either. You just need a good rotary encoder.
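
The closed-loop check really is that simple. A bare-bones sketch, where commanded_steps() and read_encoder_counts() are hypothetical stand-ins for the firmware and encoder interfaces:

```python
# Sketch of encoder-based position checking (helper functions hypothetical).
STEPS_PER_MM = 80    # typical value for a belt-driven X/Y axis
COUNTS_PER_MM = 100  # depends on the encoder used

def axis_error_mm():
    """Difference between where the firmware thinks the axis is and where it actually is."""
    return commanded_steps() / STEPS_PER_MM - read_encoder_counts() / COUNTS_PER_MM

def check_axis(printer, tolerance_mm=0.5):
    if abs(axis_error_mm()) > tolerance_mm:  # skipped steps / lost position
        printer.write(b"M25\n")              # pause the print
        printer.write(b"G28 X Y\n")          # re-home X and Y
```

No camera, no model, and it catches the failure mode (skipped steps) directly at the source.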

13 hours ago, BiotechBen said:

Machine learning is only going to become more and more important. If you were able to write a self-correcting printer algorithm, I feel like that would be something worth defending a thesis on.

Machine learning isn't the solution to everything; just because you have a hammer doesn't mean everything is a nail. This is a problem where machine learning would probably dramatically underperform compared to simple automated controls. Plus, good AI research requires expensive hardware, huge labeled datasets, and months or years of trial and error. It's a bad idea to look to AI/ML for a graduation project unless you're going to use an existing AI model as part of a broader system.

 

@Zalosath it's really hard to suggest something feasible without knowing what you've been studying and what equipment you have access to, do your professors not have any suggestions for you?


1 hour ago, Sauron said:

 

@Zalosath it's really hard to suggest something feasible without knowing what you've been studying and what equipment you have access to, do your professors not have any suggestions for you?

Good point. I'm on a course called Software Engineering; we have modules such as programming, mathematics, databases & networks, design patterns, mobile app dev, HCI, and IoT, and the optional modules I've picked are web dev and cybercrime security (we don't start those two until next semester).

I have a PC with an R9 3900X, a 3090 FE, and 32GB of RAM, which is decent for AI stuff, but I think you're probably right in saying that staying away from AI and ML is for the best.

 

Professors are against giving out ideas; they can help us flesh out existing ones, but they won't think of one for us. So if I presented them with "computer vision with data analytics", they're probably not going to be able to help with that, but if I presented "computer vision to solve a jigsaw puzzle with edge detection", there's a lot more they can work with.

 

And before you say to do the jigsaw puzzle: it exists already, and I don't think I could improve on it.


1 hour ago, Zalosath said:

And before you say to do the jigsaw puzzle: it exists already, and I don't think I could improve on it.

How about... that, but on a phone app that has to deal with bad camera angles and not seeing all the pieces at the same time?


2 minutes ago, Sauron said:

How about... that, but on a phone app that has to deal with bad camera angles and not seeing all the pieces at the same time?

Potentially, some kind of scale/shape correction to try and figure out the "actual" size and shape of the piece. I'll have a look into that. Do you think it would be enough?
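
 

Something like this is what I have in mind: a minimal OpenCV sketch, assuming four reference points of known real-world spacing can be found in the photo (say, the corners of the puzzle box lid). Finding those points robustly from a bad angle is the part the project would actually tackle.

```python
# Minimal perspective-rectification sketch (reference corners assumed given).
import cv2
import numpy as np

def rectify(image, corners_px, width_mm, height_mm, px_per_mm=4):
    """Warp the image so the quad at corners_px becomes a true-scale rectangle."""
    dst_w, dst_h = int(width_mm * px_per_mm), int(height_mm * px_per_mm)
    dst = np.float32([[0, 0], [dst_w, 0], [dst_w, dst_h], [0, dst_h]])
    # Homography mapping the photographed quad to a fronto-parallel view.
    homography = cv2.getPerspectiveTransform(np.float32(corners_px), dst)
    return cv2.warpPerspective(image, homography, (dst_w, dst_h))
```

After rectification, piece contours can be measured at a consistent scale, which is what the matcher would need.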


12 minutes ago, Zalosath said:

Potentially, some kind of scale/shape correction to try and figure out the "actual" size and shape of the piece. I'll have a look into that. Do you think it would be enough?

Completely solving that problem would be very hard; I'd say there's plenty to work with.


45 minutes ago, Sauron said:

Completely solving that problem would be very hard; I'd say there's plenty to work with.

Great! I'll ask my project lead. I appreciate it.


2D polygon nesting. The maths are very interesting but quite complex. This type of algorithm solves real-world problems such as optimizing material cutting in manufacturing, and it can also be pushed into 3D nesting for more problem solving in 3D printing, how to fill a 52' truck with optimal weight distribution, and more.
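
 

To give a flavour of the problem, here is a toy version simplified to axis-aligned bounding boxes with a greedy bottom-left heuristic. Real nesting works on the polygons themselves (no-fit polygons, rotations), which is where the interesting maths lives; this sketch only shows the packing-loop skeleton.

```python
# Toy bottom-left packing on an integer grid (a stand-in for real nesting).
def bottom_left_pack(rects, sheet_w, sheet_h):
    """rects: list of (w, h). Returns [(x, y, w, h)] placements, biggest first."""
    placed = []
    for w, h in sorted(rects, key=lambda r: r[0] * r[1], reverse=True):
        best = None
        for y in range(sheet_h - h + 1):          # lowest position first...
            for x in range(sheet_w - w + 1):      # ...then leftmost
                candidate = (x, y, w, h)
                if not any(_overlaps(candidate, p) for p in placed):
                    best = candidate
                    break
            if best:
                break
        if best:
            placed.append(best)                   # rects that don't fit are skipped
    return placed

def _overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

Swapping the rectangle overlap test for true polygon geometry is where a research project would start.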


5 hours ago, Franck said:

2D polygon nesting. The maths are very interesting but quite complex. […]

I like this, especially the 3D printing applications. Existing solutions for auto-placing objects in slicers are pretty crap; this could be a great way to improve them. Cheers for the idea!


  • 2 weeks later...

What about an auto-slicer that automatically slices 3D models for printing?

 

I know they do a lot of calculations to fly planes (the weight distribution matters a lot); what about automating that?


Write software that trades stocks and consistently returns above-market profits. If you can pull this off, you are set for life. The only people I can think of who can boast of accomplishing this are the ones behind the Medallion Fund.


2 hours ago, Wictorian said:

What about an auto-slicer that automatically slices 3D models for printing?

 

I know they do a lot of calculations to fly planes (the weight distribution matters a lot); what about automating that?

Slicers exist already; there isn't anything I could do, bar polygon nesting, to improve the existing systems.

 

As for the flying planes, how would I test that simply? An RC plane would likely be costly and unlikely to fly well if I had to attach new parts to it.

Or do you mean make a calculator to determine the best places to put objects on a plane for an even weight distribution? I'm sure systems like this already exist, and it wouldn't be complex enough for a project.

 

2 hours ago, wasab said:

Write software that trades stocks and consistently returns above-market profits. […]

If I could make that, I can assure you I wouldn't be at university! I could do something related to data processing and statistical analysis, though; I'll have a look around that area.


On 10/6/2022 at 10:29 AM, Zalosath said:

I'm in my final year of Computer Science and I'm looking for major project ideas! […]

One that I've always wanted to do, and came close to doing before a family disaster kept me from my hobbies for a couple of years:

Be warned: this one is extremely intensive in higher-order maths.

Languages definable by parallel string rewriting are proven to be equivalent to languages definable by sequential string rewriting. In a manner of speaking, this means that a generalized L-System can define any language.

The project that I want to undertake is two-fold:

  1. Prove that it is possible to write a useful programming language wherein parsing happens on all tokens in "parallel" rather than sequentially. 
    1. From some early experiments I conducted with plain string rewriting, execution is actually fairly easy to parallelize in all cases. General rules can be derived for running a parallel string rewriting algorithm that handles both context-free and left-right context-sensitive grammars in parallel. This can speed up processing of large inputs by an incredible amount.
    2. This can generate huge amounts of user-defined patterned and possibly stochastic data quickly (enough to crash a 32GB RAM machine in about 5 seconds using just an i7-6700HQ when doing things in-memory).
  2. A mini-language for object searching and manipulation can be created, similar in some ways to LINQ or PLINQ, except that, by the very definition of a parallel string rewriting system, the language used would allow for "automatic" parallelization of the searching, creation, and manipulation of a collection of objects.

 

L-Systems and parallel rewriting languages are definitely an open area of research in this regard. Most existing research has to do with their usefulness in describing biological growth/processes or for drawing curves.

One way to see if this is an area worth exploring further is to try to convince yourself that the string rewriting rules used in L-Systems are the same as would be defined for a Chomsky grammar using BNF.
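
 

To make the parallel-rewriting idea concrete, here is a minimal context-free L-System step in Python, using Lindenmayer's original algae grammar. Each symbol's replacement depends only on itself, which is what makes the pass embarrassingly parallel; it is written serially here for brevity.

```python
# Minimal context-free L-System: every symbol is rewritten in the same pass.
rules = {"A": "AB", "B": "A"}  # Lindenmayer's algae system

def step(s):
    # Conceptually a parallel map over symbols; each lookup is independent.
    return "".join(rules.get(symbol, symbol) for symbol in s)

s = "A"
for _ in range(5):
    s = step(s)
print(s)  # ABAABABAABAAB — the algae system after five steps
```

Handling left-right context-sensitive rules in the same parallel pass (each symbol reads its neighbours from the previous generation) is where the project's harder questions begin.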

 

