Computer Science and Engineering has been a lucrative stream for many aspiring engineering students in India. But once they join an engineering institution to study it, the course structure, regularities and trivialities sway them away from the actual goal. To begin with, most of them don’t even master a single programming language in the first year. Until the third year they have no idea of the practical application areas of Computer Science and Engineering. When it comes to the grand final project, most of them are mugging up textbooks and preparing for GATE or other examinations (which is good, but not at the expense of the learning from the final project) or the routine interview circuit. Many of them end up copying or even buying the final project. When they leave college they are just balloons of technology waiting to burst. I saw the same even during my MS course at one of the premier institutes of India: the course has a mandatory paper each for databases, data mining and data warehousing, for some godforsaken reason, as if software systems are just about writing database applications. The irony – Data Structures and Algorithms are rushed through in a single paper!
A major restructuring of both the course and the outlook towards engineering education is essential to produce computer engineers en masse who actually know what the subject stands for:
- Basics first. Pay for faculty members who have a strong hold on the subjects they teach, enough to demonstrate practical applications of some of their significant aspects. Examples and explanations matter.
- Keep core subjects like Data Structures, Algorithms, Compiler Design and Computer Architecture compulsory. Make subjects only loosely coupled with Computer Science and Engineering optional. In my experience I have seen many students who scored huge marks in those but couldn’t handle thread priorities even in the final year. Subjects like Mechanical Engineering and Workshop, Operations Research, Control Systems and Electrical Circuits should be optional; if a student is interested, he will opt for them. Add more optional subjects like AI, Embedded Systems, Genetic Programming and even Game Design.
- A programming language of choice (one that is still relevant) should be compulsory in the first semester, and make sure the students can write a small daemon with it that can service asynchronous requests.
- Give practical assignments and spread them throughout the year. Explain the assignments first. And oh yes, a practical assignment does not mean developing a full game in a week’s time. It means RESEARCHING and developing strong algorithms for one of its tricky aspects and implementing them. When you teach object-oriented programming, ask them to break a problem into classes. Search Google and take a look at assignment papers from the premier institutes of the world.
- Throughout the course years, interest them enough to keep themselves continuously busy with real and useful projects, not just semester results – for example, contributing to ongoing collaborative projects. And award some credit at the end of the final year for that. As a start, once they are done with a programming language, give them the URL to SourceForge. Many of them will take it from there themselves.
- Each student is different. When you take money from each of them, it is your responsibility to find out their areas of interest. Yes, for each one of them. Put groups of students under each faculty member from the first year to find that out. And order your faculty not to play politics with students over starting a project under them around the third year. If a student is interested in creating web pages, help him nurture and cultivate that interest. He may one day come up with something like Wikipedia. Show them the possibilities and encourage them to do what they love to do.
- Stop encouraging them to join external coaching centres to learn .NET and Java. For the true knowledge they need, those are rotten, useless tools. Read the third point above. They pay you to learn how to write such tools, not merely use them.
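To make the daemon exercise above concrete, here is a minimal sketch of the kind of first-semester program I mean: a tiny daemon that services asynchronous requests concurrently. The port, the line-based protocol and the echo behaviour are all my own illustrative choices, not a prescribed syllabus; the point is only that a first-year student should be able to write something of this shape.

```python
# A minimal asynchronous daemon sketch (illustrative, not prescriptive):
# each connection is served concurrently without manual thread bookkeeping.
import asyncio

async def handle_client(reader, writer):
    # Serve one request per connection: read a line, echo it back.
    data = await reader.readline()
    writer.write(b"echo: " + data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main(host="127.0.0.1", port=8888):
    # Start the daemon and serve requests until interrupted.
    server = await asyncio.start_server(handle_client, host, port)
    async with server:
        await server.serve_forever()

# To run the daemon: asyncio.run(main())
```

An exercise like this forces the student to meet sockets, concurrency and protocol design at once, which is exactly the kind of practical grounding the points above argue for.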
I accept the reality: opportunities for good and real applications of Computer Engineering are scarce in India, and the huge number of students graduating each year makes it worse. When they graduate, most students dream of doing amazing things, but unfortunately they are not skilled enough, and not strengthened enough to hold on to those dreams for long. Most of them settle into maintaining decade-old stagnant applications for years and lose interest in even trying something new or enhancing their existing skills (and I am not counting paid ‘professional’ courses and interview preparations for job switches), because the institutions failed to instill that interest in them at the right time. In most cases, their thirst for technology dies out in two years, tops, and this huge energy vanishes into the trivialities of technological oblivion.
With the current LinuxCon on its last day and enthusiastic, even overwhelming, participation by the largest of the tech giants (Intel, IBM, Cisco, Samsung and HP, to name a few), it’s quite clear that the latest surge in open source development is not going to decline anytime soon. Open source has never been this powerful before, and it is continuing to grow. While historically Microsoft has dominated the desktop market, right now it doesn’t have any strong answer to open source alternatives. There was a time when you could get away with hiding your source code and users were happy just using the service; today, more and more users are becoming aware of the problems of using something they can’t control, something for which they always have to go back to the vendor when they face problems. They are realizing that they are paying for something which they could have handled themselves if they knew how it works, or for which they could have got help from others who know how it works… for free! Today, hardware companies like Intel and NVIDIA are weighing options like which one to support – Wayland or Mir (both open source) – as part of their business strategy. Things have changed!
This brings forth a vital question about growth in individual consumer adoption vs. mass adoption by organizations like governments. The latter is slow, though the trend has recently begun to change. Germany, Argentina and many countries in Africa are changing it. Their governments have started recognizing open source as a viable and better alternative to closed source software. There are many reasons behind that:
- open source has many more alternatives today, if you don’t like one, use another
- you don’t just use it, at some point you start contributing and making it suit your needs better
- you don’t pay fortunes in support, you get help from active and strong communities
- your IT experts can verify that your data is not being sniffed at
- the hardware vendors are supporting open source more actively than they ever did
However, the exceptions still far outnumber the adopters. One reason is that governments, just like many individuals out there, are used to closed source products; the second is that government money is actually taxpayers’ money. No one cares about the expenditure as long as the supply is there.
In developing and third world countries, if a government actually wants to change the state of affairs in information technology and reap its benefits, it should head for open source. It can cut costs in government administrative offices, schools, hospitals and the military to a great extent. With the money saved from not buying commercial closed source software, it can ensure that even remote schools have internet access and at least a few computers.
Security? What security? Security is an illusion on the Internet. Someone somewhere can always peek into your secrets. Careful what you let them see!
And who’s not, you ask… Maybe the subject line would have been more appropriate if I had written – searching for the best smartphone platform. In reality, the top smartphones (with similar hardware) based on the same platform differ very little in features, performance and looks. The smartphone war has a new dimension now – it’s a war of ecosystems, not just of models. The numbers of developers, app users and manufacturers are very significant factors when you try to decide which one is the best. Let’s have a look at the platforms dominating the market today (alphabetically) –
- Android: Google didn’t have to pay anything for the core, and Samsung didn’t have to pay anything for a ready-to-market platform – hence the lower price. While I agree that the price is proportional to the hardware (primarily) and other costs involved (like design, development and manufacturing), Android has never reached the performance one can expect on Linux. While the main factor is the UI framework, even the native C++ browser feels much slower than the iPhone’s browser. Contrary to popular belief, Google has produced many mediocre products in its history.
- iOS: Overpriced and much hyped. I mean, come on… they don’t even pay the Chinese workers in the manufacturing units a respectable daily wage. (Not convinced? Read this.) May Steve Jobs rest in peace. Their hardware is good, but it doesn’t add up to what they charge. With more competition from cheaper and reasonably priced platforms like Android, Apple will definitely suffer.
- Windows: Going by the Surface results, I am seeing a BSOD for Windows in the smartphone arena. They had a huge advantage from being very friendly with Intel, but now both are on the decline. With the desktop hardware era ending and Intel losing the battle to ARM in smartphone hardware, I find it hard to believe Microsoft will remain more than a service provider a decade from now. A dying Nokia can’t recharge them either.
What we are seeing here is an opportunity for a platform that gives you stunning performance at a standard price and can scale. Don’t worry, newer platforms are going to look way cooler; that’s not a concern anymore. When I say scale, I mean you get a desktop and a smartphone together – something that eliminates the need to touch any other device. There are many hurdles to overcome here; e.g. a regular desktop gamer will always prefer an Alienware to a smartphone, and a regular office suite user handling Excel sheets will look for a big screen. Another important feature all smartphones are deliberately missing today is hardware upgrades. It’s a cunning business strategy, and it’s a pity people don’t see they are paying huge amounts to get stuck with the same hardware till it’s outdated waste. Software-only upgrades are the most profitable con ever played in the history of consumer electronics. I’m sure some smartphone manufacturer is going to offer hardware upgrades and change the history of smartphones. A smartphone with all these features is yet to arrive, and I wish it emerges out of open source software. I have high hopes for initiatives like Ubuntu Mobile, Firefox OS, Tizen and Replicant.
I am not going to bore you with – you have a good product and an incomprehensible documentation… blah blah blah. Let’s talk about the case where you have an excellent product and excellent (at least that’s what your technical architect thinks) documentation to go along with it. The product sold reasonably well, but then, all of a sudden, your developers keep getting bug reports from customers that are often invalid. To your surprise, things are actually explained as far as possible in the documents (I mean it literally). Your developers keep whispering to each other – don’t the customers read the documents?
I realized the problem when I was going through the documentation of a product. When the developers who wrote the software worked on the inputs for the document, they did their best and provided every possible detail to help customers out. However, one thing got lost along the way – the documentation was no longer human. While that works for manuals for small utilities, where the bulk is much smaller, it doesn’t play well with full products.
Your customers are human beings, and 90% of them will not have the patience to keep reading through the document if you can’t keep them glued to it. Most people love to learn on the job, as they have confidence in their technical capabilities, so they don’t like reading through hundreds of pages of documentation. And whenever there’s a mismatch between your developers’ reasoning and your customers’ expectations, you have a new issue filed.
So how do you avoid it? First things first – don’t make your developers write the documentation. Robots can’t make love, as a general rule, so don’t take the risk with your business at stake. Have an expert write it with inputs from them. Next, keep it a jolly document, not a huge list of DOs and DON’Ts. Another wrong notion is that corporate documents must be very formal. NO. Try being a bit informal; add light humor to it occasionally. Finally, try your own potion: make people in the organization read it – testers, other teams, busy managers… and gather their feedback.
One of the manuals/documents I absolutely love is man top, and I think it’s a great example of how documentation should be. Run quickly through it till the end and you will see what I mean.
Gnome 3 sucks! Somehow they thought they could pull it off like Canonical’s Unity, but so far Gnome 3 is a failure. The reasons discussed by Felipe are not limited to Gnome 3 alone, though; the attitude has spread to many important open source applications whose developers each think they are a Linus Torvalds. An older post of mine –
The open source stubbornness at application level
Application-level software patents have been the most filed patents of the last decade. Just check the records of Apple. When Apple claims it came up with the first smartphone, it should be reminded that it actually integrated many features individuals need daily onto the same device. It didn’t own the patent on the email protocol; it was not the first to come up with a video format or display hardware. The same goes for every company (especially companies dealing in smartphones) coming up with software application patents. The truth is – innovation isn’t possible at the application level. Application patents are clever logic but nothing groundbreaking; they are just what the name says – applications. If you consider that patentable, then think: had all the algorithms and fundamental tools been patented, today’s application developers couldn’t have written a single line of code without violating patents. Here’s a case study on the innovations Microsoft claims as its own –
Microsoft, the Innovator?
It does add up when you consider the fact that Microsoft was never into hardware, firmware or protocols until recently.
The basic problem is that most of these companies thriving on application-level patents assembled pre-existing ideas, came up with hugely marketable products, and then felt threatened when some other company did the same with their idea and introduced a cheaper or better product. Patenting here is the instrument for running a competition-free business. Perhaps the accurate measure for a so-called software application patent should be how marketable the idea is rather than how patentable it is. The primary goal of these companies is business, which thrives on monopoly and popularity. Innovation warrants research. Using a slider to unlock a phone is neither research nor innovation.
Here’s another article on the same subject.
You can hire mediocre programmers to make things run slowly on a smartphone with a 1 GHz processor, or just use Java like Android does. Sorry, Google!
Read this article in The Hindu today – UID will result in loss of freedoms
I think it’s just a matter of time before we see the results of this centralized information storage. There can be different outcomes:
- If used properly, this can help curb crime, no doubt about it. A huge crowd like India’s, with its frequent urges toward indiscipline, needs to be tracked.
- On the other hand, if used incorrectly, this data may end up harming innocent citizens. That’s not a far-fetched idea at all. No one can guarantee that the people handling this data are beyond question.
- How is all this data secured? How difficult is it to access? To start with the basics: many government portals still use HTTP instead of HTTPS today.
In short – as of today, NO. All the image hosting sites are vulnerable, and no matter what they do, the only options you are left with are: let users download only smaller resolutions of the images, or remain content with the superficial security the sites provide, which may hold off casual viewers to some extent. I once posted on how to view images in higher resolution on Picasa (not sure if it still works). Similarly, hosting sites like SmugMug can’t stop viewers from downloading images using Firefox’s Page Info. Browsers remain the biggest loophole in data security and privacy. It’s definitely a matter of great concern for professional photographers who don’t want to give away their high-resolution originals.
Thinking about it, a possible way to avoid giving away your original uploaded images seems to lie in the browser. Today a browser gives away the link to your original image file no matter how much you try to hide it behind JavaScript or otherwise. But what if it were possible to ask the browser not to link to the original image file but to render it behind a semi-opaque frame (so that screenshots don’t give away the original quality) after reading the image information as encrypted binary data?
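The data flow I have in mind can be sketched in a few lines. This is a toy illustration, not a security design: the server ships only encrypted bytes, so the wire never carries a directly usable image file, and only a cooperating renderer that holds the key can reconstruct the pixels. The XOR keystream below is emphatically NOT real cryptography and the function names are my own; a real implementation would use a vetted cipher and a browser-side renderer painting the decrypted pixels to a locked-down surface.

```python
# Toy sketch of encrypted image delivery (illustrative only; XOR with a
# hash-derived keystream is NOT real cryptography).
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random byte stream of the requested length from the key.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR the image bytes with the keystream; the result is what goes on the wire.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

decrypt = encrypt  # XOR is its own inverse

# The browser-side renderer would decrypt in memory and paint the pixels
# behind the semi-opaque frame, never exposing a linkable original file.
```

Even in this toy form, the point is visible: anyone saving the transferred bytes gets ciphertext, and the original image only ever exists inside the renderer.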