Which language for back-end, front-end and logic

RRGT19

Hi,

I have to create a desktop application for a client.

 

I know Java, HTML, CSS, .NET, PHP, JavaScript.

But, if I need to choose another one, I can learn about it quickly.

 

Which language is best for the back-end, the front-end, and all the logic?


Well, if you're doing it in a browser, then HTML, CSS, and JavaScript for the front end and PHP for the back.

 

If you're making a desktop application, .NET or Java.

Vorticalbox | Scientia Potentia est


Given what you say you know, I would probably consider one of these for a desktop application:

  • C# (WPF)
  • Java (JavaFX)

I'd argue for a web-based application, because you don't have to worry much about system compatibility. You'll get more marketability if you can say your thing can run on Windows, Mac, Android, and iOS.

 

I'd also argue for Node.js on the back-end, one of the best options since it requires the least environment setup and work (relatively speaking). This consolidates the number of programming languages you use to just one: JavaScript. If you need a database, you can use something like MongoDB with a Node.js driver to access it.


As long as it remains unclear what the "backend" is and what it should do, there is no way to answer any of these questions. Please, dear OP, be more specific and ignore all the answers given so far, especially the one suggesting you use JavaScript for anything.

Write in C.


Why learn a new language? There are tools to ease your development, like:

  • Git
  • Gulp / Grunt
  • Less
  • Sass
  • CoffeeScript
  • jQuery
  • Node.js
  • Laravel

 

Maybe some of those interest you.



On 2017-02-23 at 0:26 PM, M.Yurizaki said:


I agree, but that said, you still have to deal with cross-browser compatibility. However, that's still (IMO) easier to handle than a native app (even if you were to use a cross-platform framework like Qt).

 

To answer the question, however: I'm partial to something like Phoenix (the Elixir web framework), which is similar to Ruby on Rails but much harder to get started with, so I'd go for Ruby on Rails on the backend and have clients (web or native) query that. If you're coming from Java/JS/PHP, Ruby will be much easier to pick up. The front end depends on what exactly the client wants. If they want a native app on Windows, then C# or Java is the way to go (I'm assuming you don't want to learn C++ in order to use something like Qt). If they want a webapp, then you can choose between the hundreds (exaggerated, but it sure feels like that sometimes) of JavaScript frameworks, or write the HTML/CSS/JS by hand, which is generally fine, especially for smaller projects.


Honestly, if you don't need the scalability of PHP or C++, then Java or C#/VB.NET for the backend/logic is plenty. If you have a hugely data-intensive application with real-time updating, then C++ is about your only option, because Java/Gosu/Scala do not scale well and cost you roughly 50% of your performance and up to 20x the memory (which is the most expensive thing in hosting on AWS).

 

http://readwrite.com/2011/06/06/cpp-go-java-scala-performance-benchmark/

http://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=scala&lang2=gpp

https://github.com/kostya/benchmarks


On 2/23/2017 at 1:14 PM, RGomez9119 said:


Python is a relatively easy language to learn.

 

However, while the website has been deteriorating as of late, Sphere or minisphere may be an option. It uses the core features of JavaScript plus its own API to make writing Windows applications rather easy.

 

You can download either one from the Downloads drive at spheredev.org.



23 hours ago, sgzUk74r3T3BCGmRJ said:

C++ is the only option? PHP is the best choice for scalability? Those are interesting claims.

 

 

I have a side project I work on with a friend. He handles most of the front-end web stuff and a large part of the API; I tend to provide "infrastructure" support. One of those pieces is an "on demand" media-generation service. If you have a phone/Xbox/browser that needs an image, you request it from a service I built. In a way it's like Imgur, but it exists only to serve a single application, supports many more image sizes/formats, and is invisible to end users.

 

Here are the throughput charts from last night: [throughput chart image]

Coming in the front door we've got a crapload of requests, but Cloudflare is eating about 3/4 of them. At peak load we're doing 5k requests/second to the origins; not the kind of traffic we'd see at our day jobs, but for a side project that's pretty damn respectable.

 

For hardware, our front-line kit is 2x M4.4xlarge running Varnish. These don't strictly need to exist, but Cloudflare doesn't officially support tiered caching, and it makes sense to take some of the load off the origins. We're getting about a 30% hit rate on these, mostly by caching "fresh" images. Unfortunately our total image catalog is coming up on half a petabyte of "source" images. We considered pre-computing every possible size and serving directly from S3, but it ends up being more costly than doing it on demand. We have a sort of "long tail" problem, and I haven't come up with a good, cost-effective solution. This second tier of layer-7 caching helps keep our average cache-miss time under 80ms, and it cuts the total number of "real" requests to the back-end nodes to around 4k/sec at peak time.

 

When Varnish misses, it load-balances across the compute nodes. Our baseline is 4 servers over 3 availability zones. We need 3 servers, but want some measure of extra capacity to cover the window while auto-scaling happens; boot time is a couple of minutes, and performance dives if we run out of CPU. There's some network latency here because we're crossing availability zones, but it's manageable (<10ms) given the needs of the service.
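For illustration only (the real setup uses Varnish for this, and the backends here are in-process stand-ins rather than actual compute nodes), the round-robin idea behind that load balancing can be sketched in a few lines of Go:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

// roundRobin spreads incoming requests across a fixed set of origin
// servers — the same job Varnish does in the setup described above.
type roundRobin struct {
	backends []*httputil.ReverseProxy
	next     uint64
}

func newRoundRobin(targets []string) *roundRobin {
	rr := &roundRobin{}
	for _, t := range targets {
		u, err := url.Parse(t)
		if err != nil {
			panic(err)
		}
		rr.backends = append(rr.backends, httputil.NewSingleHostReverseProxy(u))
	}
	return rr
}

func (rr *roundRobin) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// Atomic counter so concurrent requests don't race on the index.
	i := atomic.AddUint64(&rr.next, 1)
	rr.backends[i%uint64(len(rr.backends))].ServeHTTP(w, r)
}

func main() {
	// Two stand-in "compute nodes" (the real ones are C4.4xlarge instances).
	a := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { fmt.Fprint(w, "node-a") }))
	b := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) { fmt.Fprint(w, "node-b") }))
	defer a.Close()
	defer b.Close()

	lb := httptest.NewServer(newRoundRobin([]string{a.URL, b.URL}))
	defer lb.Close()

	for i := 0; i < 4; i++ {
		resp, err := http.Get(lb.URL)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(string(body)) // alternates: node-b, node-a, node-b, node-a
	}
}
```

In production you'd also want health checks so a dead node drops out of the rotation, which is something Varnish's backend probes give you for free.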

 

We're using C4.4xlarge for compute nodes, and they're typically sitting around 70-80% load with about 1 GB of memory used; at low-load times it's under 100 MB. At peak load we're typically running 6 of these, maybe 8 on a very busy day. This is a one-off Golang 1.6 application on Alpine Linux that is responsible for doing all of the real work.

 

In the interest of full disclosure, the heavy-compute part of the code is a wrapper around a C library (libvips). I'm using it mostly because, at the time I wrote it, Go didn't have its own image library and I certainly couldn't be assed to write my own JPEG/GIF/MPEG/etc. decoders. Writing a wrapper around a C library saved me a lot of time, and libvips is also well known for exceptional performance. It's actually a remarkably slow part of our code path, partly because by the time we're pushing pixels through it, we've already got them in memory. Vips really shines when it's processing images from disk, because it has a bunch of techniques to minimize IO; we're side-stepping that because we never do disk IO.

 

Waiting on assets from EBS/S3 is the dominant part of the algorithm, but Go makes it easy to handle that without blocking. This wasn't exactly the most challenging thing I've ever had to do, but it convinced me that Golang scales well enough.
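That "wait on assets without blocking" pattern is basically goroutines plus a WaitGroup; a minimal sketch with a fake fetch function standing in for the real S3/EBS reads:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// fetchAsset stands in for an S3/EBS read; the real call does network IO.
func fetchAsset(key string) string {
	time.Sleep(50 * time.Millisecond) // simulate IO latency
	return "bytes-of-" + key
}

// fetchAll grabs every asset concurrently, so total wall time is roughly
// the slowest single fetch rather than the sum of all of them.
func fetchAll(keys []string) map[string]string {
	var mu sync.Mutex
	var wg sync.WaitGroup
	out := make(map[string]string, len(keys))
	for _, k := range keys {
		wg.Add(1)
		go func(k string) {
			defer wg.Done()
			data := fetchAsset(k)
			mu.Lock() // the map isn't safe for concurrent writes
			out[k] = data
			mu.Unlock()
		}(k)
	}
	wg.Wait()
	return out
}

func main() {
	start := time.Now()
	assets := fetchAll([]string{"img/1.jpg", "img/2.jpg", "img/3.jpg"})
	// Elapsed time is close to one fetch's latency, not three.
	fmt.Println(len(assets), "assets in", time.Since(start).Round(10*time.Millisecond))
}
```

In a real handler you'd also thread a context through so a cancelled request abandons its in-flight fetches.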

 

~5k requests/second with ~100 KB response bodies on ~150 cores with < 100ms responses is respectable no matter how you slice it. I can get better performance in toy benchmarks out of Elixir, and if I swapped in fasthttp I could probably make the health-check ping respond 1000x faster, but for real work the standard library plus one wrapped dependency has served me pretty well.

 

Could I do better in C++? Well, if you measure in terms of hardware count, then at base load I couldn't: we want redundant servers plus at least one "hot spare". There's not much CPU time to be gained by dropping the automatic memory management: it's about 0.01% of request time. Occasionally GC is really slow (like 500 microseconds), so it'd never work for a real-time system, but for network applications I'm ready to ignore anything less than 5ms/request so long as throughput isn't suffering.

 

At peak load, where we're running 7 or 8 servers, a C implementation might help. Suppose Golang is half as fast as C. If that were true, then in principle we could save a couple thousand dollars a month, because the baseline servers would be able to fully handle peak load. In practice we're more limited by network IO than by CPU. It could be that Golang's IO is particularly bad, though I didn't see evidence of that in strace. Supposing I could get a 50% speed-up with a C rewrite, that's only saving 1-2 servers at peak load, and that's just not worth my time.

 

My application binary would be smaller if it were C, but given that the whole program is a 40 MB static binary, and that two-thirds of that is the image-processing library, I'm not sure the gains are worth talking about. I don't really need much memory for anything but the caches, and you can't buy compute-heavy servers without at least 10x more RAM than I want.

 

It could be that I'm a terrible C programmer or a prodigy at Go (I don't think either is true, but let's just say…); still, I feel like a C solution would be at least 2-3x larger (from ~3k to ~10k lines of code, including tests). I also feel my C version would probably have to sit behind something like Nginx for TLS, as doing that yourself is fraught with hazards at the best of times. That's fine, but it would dramatically increase the complexity of the solution. Maybe there's something as nice as net/http for C/C++ that I don't know about; I've never really considered exposing C++ directly to the internet before, so I have to admit my ignorance. I also suspect I'd be more likely to screw up concurrency, especially if I tried to keep doing things like connection or buffer recycling. My biggest issue with the Go version was a slow file-descriptor leak, which was easy to find and fix once it got into production.

 

I don't know what you mean by "data intense": we've got around 500 TB of data and do around 1 PB of traffic each month, but that doesn't really feel like it'd qualify. On the other hand, it feels like this project is right on the edge of what you'd want to do while generating an HTTP response. If I had more time and money I'd probably use a different architecture, probably something wrapped around a Kafka pipeline.

 

Of course, maybe I'm all wrong on this: can you share some of your experiences with "mid-sized" (say 1-10k reqs/second) C/C++ web applications? I've simply got zero experience there, and I don't know anybody who does.


Mind you, what you're doing isn't compute-intensive in the slightest. Media conversion is handled by fixed-function hardware even on CPUs, and especially on AWS motherboards, which are loaded with all sorts of accelerators these libraries are aware of. And yes, your IO is your bottleneck. All you're doing is taking an image, compressing/converting it, and sending it over the wire.

 

In my case, we have a claims center and analytics engine all spun up into one massive 82 GB system. We just migrated the entire Gosu front end to a C++ MVC project and shaved off 10 GB of memory usage. The analytics backend is all based in Scala, which is half as fast as good C++ in the best case, and much slower in others.

 

There is a reason Google's codebase is overwhelmingly C++ with raw C a distant second (a 150-million-LOC monolithic C++ codebase). While Golang is great for system-level rapid prototyping, it just cannot keep up with good C/C++, and everything Golang is good at is being obsolesced by Rust (C++'s in-fashion younger sibling) because, once again, it beats the snot out of it in performance. So it all comes down to what libraries we have, not the features of the base language.

 

If you had direct control over your network hardware and put a proper 20x10 Gbit/s switch on the front end of that, you'd find you'd only have a hope of saturating it by going to C/C++. Your CPUs themselves would become the bottleneck long before your network infrastructure would.

