Category: Articles

iPhone X Face Recognition and the Sensitivity of Biometric Data

What is iPhone face recognition?

The upcoming iPhone X will use Face ID, a technology that unlocks the phone by using infrared and visible-light scans to uniquely identify your face.

What is Face ID?

Face ID is a form of biometric authentication. Rather than a password (something you know) or a security dongle or authentication app (something you have), biometrics are something you are. Fingerprint recognition is also a biometric.

Instead of one or more fingerprints, as with Touch ID, Face ID relies on the unique characteristics of your face.

How does it work?

To work reliably, Face ID must:

  • Initially scan your face accurately enough to recognize it later.
  • Compare a new scan with the stored one with enough flexibility to recognize you nearly all the time.
  • Scan your face in a wide variety of lighting conditions.
  • Update your facial details as you age, change hairstyles, grow a mustache, change your eyebrows, get plastic surgery, and so forth to still recognize you.
  • Let you wear hats, scarves, gloves, contact lenses, and sunglasses, and still be recognized.
  • Not allow a similar-looking person, a photograph, a mask, or other techniques to unlock your phone.

Apple uses a combination of an infrared emitter and sensor (which it calls TrueDepth) to paint 30,000 points of infrared light on and around your face, and it also captures flat 2D infrared snapshots. For each point, the reflection is measured, which allows the system to calculate the depth and angle of each dot from the camera and construct a depth map.
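
The depth-from-dots idea can be sketched with simple triangulation. This is only an illustration of structured-light depth sensing in general, not Apple's actual Face ID algorithm; the baseline, focal length, and disparity values below are invented for the example.

```python
# Illustrative sketch of structured-light depth estimation (NOT Apple's
# actual algorithm): with a known baseline between the IR emitter and the
# camera, the shift ("disparity") of each projected dot on the sensor
# gives its depth by pinhole-camera triangulation.

def dot_depth(disparity_px: float, baseline_m: float, focal_px: float) -> float:
    """Depth of one projected dot: z = f * b / d (pinhole-camera model)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def depth_map(disparities, baseline_m=0.05, focal_px=600.0):
    """Build a depth map from per-dot disparities (hypothetical values)."""
    return [dot_depth(d, baseline_m, focal_px) for d in disparities]

# A nearer dot shifts more on the sensor, so larger disparity means
# smaller depth; these three dots sit at roughly 0.3, 0.4 and 0.5 metres.
depths = depth_map([100.0, 75.0, 60.0])
print(depths)
```

Repeating this for tens of thousands of dots is what yields a full depth map of the face.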

Live depth mapping is also used for live tracking in Animoji, the talking animal heads (and piles of poo) that match your facial expressions and lip movements, and for other selfie special effects, and it is provided to third-party developers. But live depth mapping doesn't offer up raw sensor data that would let a developer re-create a Face ID map.

Where is biometric data kept, how secure is it, and does this technology violate any privacy laws?

Apple's description of enrollment and comparison is very similar to Touch ID. Enrollment sends data through a one-way channel to the Secure Enclave, a special tamper-resistant chip embedded deep inside the iPhone and iPad architecture that can only respond with limited information, such as confirming that a match was made when unlocking for Apple Pay and the like. The Secure Enclave also stores some other private information.

As a result, Apple doesn’t collect this information and process it centrally, nor does it store it on the device in a manner that can be retrieved by cracking a phone, a phone backup, or intercepting information to and from it.

However, the concern remains that, with proprietary technology under Apple's control, a government could force changes that would pass along or extract facial identification information, or perform comparisons against faces a government is looking for.

In the current hardware architecture, however, that seems unlikely. Apple has engineered its systems so that there’s no reasonable way to rework it to change the flow of facial (or, with Touch ID, fingerprint) information to a different source. It would have to create a whole new kind of phone and new firmware.

As a side note, there is still no absolute guarantee that biometric data is fully secure and cannot be extracted by third-party applications if the phone lands in the wrong hands.

Pros & cons of face recognition

Pros:

  • Improves the speed and convenience of daily activities for consumers, such as shopping, where a customer could simply look into a camera, have their face identified, and pay for a transaction in a matter of moments.
  • Helps with surveillance, making it possible to monitor thousands of faces from security cameras.

Cons:

  • The security downside of this technology is that people's every movement could be tracked without their consent. It could also be used by a government to identify and follow political opponents, which would cause a great amount of conflict in a country.

 

 

Why Do Negotiations Fail?

  1. Mismanagement of expectations

Imagine going to a pizza shop and then being told it only serves sushi; disappointment is likely. The same goes for negotiations. If expectations aren’t managed properly, disappointment or frustration may ensue from a misalignment of expectations and reality, and may result in a less-than-ideal outcome for one or all parties.

Properly managing expectations comes from preparation and flexibility. If a party has done its homework, including understanding past precedents and current alternatives, that party is much more likely to have realistic expectations for its encounters. In addition, acknowledging that things may not go as planned encourages the preparation of alternative and backup plans. These plans should lay out acceptance and walk-away scenarios before negotiations begin.

 

  2. Unwillingness to empathize

It's like watching someone go fishing and never considering that they might genuinely enjoy it. Often, negotiators do not consider the other side's point of view and cannot appreciate that the other side has different needs and desires, which has a big impact on how negotiations are approached.

By considering the other side’s goals, needs, and thought processes, a negotiator will be able to anticipate arguments the other party may make and consider alternatives that the other party may find appealing even before they meet. In addition, understanding and acknowledging the other side’s point of view may improve the rapport between the parties and can have a positive impact on long-term relationships.

 

  3. Lack of preparation

Have you ever gone to the grocery store without knowing what you already had at your house, only to end up getting more of things you don’t need and less of what you do? Going into any deal without a knowledge of the negotiation’s landscape and potential traps can be treacherous for a negotiator and inhibit proper management of expectations.

A negotiator should come in knowing what relevant precedents exist for the current negotiation, what alternatives may be available (or currently unavailable), and what curveballs may be thrown during the conversation. Preparation in the form of a checklist can be especially helpful as a visual representation of what the negotiator has done, is doing, and needs to do in order to fully prepare for the negotiation. This detailed preparation will make the negotiator more flexible, confident, and purposeful than coming in with only a vague idea of what to expect.

Risk Management Failure – Why does it happen?

Big projects more often fail because of poor evaluation than poor execution. But many organizations focus on improving only the latter. As a result, they don’t identify the projects that pose the greatest risks of delay, budget overruns, missed market opportunities or, sometimes, irreparable damage to the company and careers, until it’s too late.

It would be easy to point the finger at poor execution. After all, problems surface when costs or schedules overrun. But overruns are just symptoms of the real problem: poor estimation of project size and risk. As a result, organizations take on projects they shouldn't and under-budget those they should. Here are the two drivers of failure and how to avoid them.

  1. Scope and Risk Estimates Are Sourced from Project Advocates.

In a project's earliest stages, very little is known about what it will take to execute it. So most companies seek out expert internal opinions, usually from the project's stakeholders, since they know the project best. The problem is bias. Research is clear that project stakeholders are likely to fall under its influence, favor optimistic outcomes, and produce dangerously inaccurate estimates.

One simple way to expose the impact of bias is with something we call the project estimate multiplier (PEM). It’s simply a comparison of average actual costs vs. average original estimates. The larger the PEM, the greater the impact bias has on your estimating function.
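
The PEM is simple enough to compute directly. A minimal sketch, using hypothetical project figures:

```python
# A minimal sketch of the project estimate multiplier (PEM) described
# above: average actual cost divided by average original estimate.
# All figures below are hypothetical.

def pem(actuals, estimates):
    """PEM = mean(actual costs) / mean(original estimates).
    A value well above 1.0 suggests systematic optimism bias."""
    if len(actuals) != len(estimates) or not actuals:
        raise ValueError("need one actual cost per estimate")
    return (sum(actuals) / len(actuals)) / (sum(estimates) / len(estimates))

# Three past projects: estimated at 100k, 200k and 300k,
# which actually cost 150k, 260k and 490k.
multiplier = pem([150_000, 260_000, 490_000], [100_000, 200_000, 300_000])
print(f"PEM = {multiplier:.2f}")  # PEM = 1.50: actuals ran 50% over estimates
```

A PEM near 1.0 means your estimating function is roughly honest; 1.5 means advocates' numbers should be inflated by half before anyone commits to them.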

  2. “Big Bet” Risks Are Evaluated the Same Way Smaller Projects Are.

According to research, the risk of failure is four times greater for large projects, and they exceed their already-large budgets by three times as much as smaller projects — enough to cost jobs, damage careers and even sink companies.

Most companies can accurately estimate small projects that may take, say, three to six months, but they are poor at estimating the time and cost of big ones. There are three key reasons for that.

First, large projects usually involve many interdependent tasks, which creates complexity that smaller projects do not have. That makes large projects prone to uncertainty and random events, so they can’t be estimated in the traditional way. Risk-adjusted techniques, such as Monte Carlo analysis, are significantly more accurate.

Second, large projects usually involve tasks whose difficulty is unknown at the time of estimation.

Third, the tipping point between low-risk and high-risk projects is sudden, not gradual. Once a project passes the tipping point, the risk curve changes quickly and dramatically. Unfortunately, under the influence of bias, many companies fail to see the curve, much less correct for it, until it’s too late.

To better assess and manage project risk, develop a process to measure projects against your tipping point. Projects that exceed the tipping point need to be estimated and managed differently. We’ve found the best way is to run the project plan through a series of Monte Carlo simulations. That not only accounts for the risk of uncertainty, it also identifies the tasks with the most risk of affecting the outcome.
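
A Monte Carlo schedule analysis can be sketched in a few lines. This assumes a deliberately simple model: three sequential tasks (names and durations invented for illustration) whose durations are sampled from triangular distributions. Real analyses model full task networks and dependencies.

```python
# A hedged sketch of Monte Carlo schedule analysis for a project:
# each task's duration is sampled from a triangular distribution
# (optimistic / most-likely / pessimistic), totals are accumulated over
# many simulated runs, and risk is read off the resulting distribution.

import random

TASKS = {  # (optimistic, most likely, pessimistic) durations in days
    "design":    (10, 15, 30),
    "build":     (30, 45, 90),
    "integrate": (10, 20, 60),
}

def simulate_total(rng):
    """One simulated project duration: the sum of sampled task durations."""
    return sum(rng.triangular(lo, hi, mode) for lo, mode, hi in TASKS.values())

def run(n=10_000, seed=42):
    """Run n simulations and report the median and 90th-percentile duration."""
    rng = random.Random(seed)
    totals = sorted(simulate_total(rng) for _ in range(n))
    return {"p50": totals[n // 2], "p90": totals[int(n * 0.9)]}

result = run()
print(f"median ~{result['p50']:.0f} days, 90th percentile ~{result['p90']:.0f} days")
```

The gap between the median and the 90th percentile is exactly the uncertainty that a single-point estimate hides; the tasks whose ranges drive that gap are the ones to de-risk first.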

The analysis output can then be used to develop a plan for mitigating the risk. This can include techniques like breaking the initiative into smaller projects or running tests to reduce uncertainty.

 

 

Why does risk management matter?

The last thing any project wants to face is risk. Projects are designed to take advantage of resources and opportunities, and with these come uncertainty, challenges and risk. Hence risk management becomes a very important key to all project success.

Risk management is the project manager's friend. Done well, it helps you ensure that the 'appetite for risk' is appropriately understood at the start; that all risks are agreed upon, prioritised, assessed, communicated and understood in alignment with this 'risk appetite'; and that you have a solid platform to track agreed actions, including escalation up the management chain if necessary.

Why would you develop a Risk Management Plan?

  • Provide a useful tool for managing and reducing the risks identified before and during the project.
  • Document risk mitigation strategies being pursued in response to the identified risks and their grading in terms of likelihood and seriousness.
  • Provide the Project Sponsor, Steering Committee/senior management with a documented framework from which risk status can be reported upon.
  • Ensure the communication of risk management issues to key stakeholders.
  • Provide a mechanism for seeking and acting on feedback to encourage the involvement of the key stakeholders.
  • Identify the mitigation actions required for implementation of the plan and associated costings.

Risk Management steps:

  • Risk Identification.

It means engaging the whole project team to identify the possible risks that could occur during the project life cycle.

  • Risk Analysis.

Analyse the identified risks based on likelihood and impact, and produce the risk matrix.

Risk Matrix:

It is used during risk assessment to define the level of risk by considering the category of probability or likelihood against the category of consequence severity. This is a simple mechanism to increase visibility of risks and assist management decision making.

 

  • Risk Response.

Once we identify where a risk lies in the matrix, we can choose the best response, which can be any of the below:

  • Avoid – eliminate the threat by eliminating the cause
  • Mitigate – Identify ways to reduce the probability or the impact of the risk
  • Accept – Nothing will be done
  • Transfer – Make another party responsible for the risk (buy insurance, outsourcing, etc.)
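
The scoring and response steps above can be sketched together. The 1-5 scales and the score thresholds below are hypothetical defaults; every project defines its own.

```python
# A minimal sketch combining the risk matrix and the response options
# above: score = likelihood x impact (each rated 1-5), plus an
# illustrative policy mapping score bands to a default response.

def risk_score(likelihood: int, impact: int) -> int:
    """Position in the risk matrix as a single score."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    return likelihood * impact

def default_response(score: int) -> str:
    """Illustrative thresholds only; real projects set their own."""
    if score >= 15:
        return "Avoid"      # eliminate the threat by eliminating the cause
    if score >= 8:
        return "Mitigate"   # reduce the probability or the impact
    if score >= 4:
        return "Transfer"   # insurance, outsourcing, etc.
    return "Accept"         # low enough to live with

print(default_response(risk_score(5, 4)))  # Avoid
print(default_response(risk_score(2, 2)))  # Transfer
```

A table of all 25 score combinations, colour-coded by band, is the usual visual form of this matrix in a risk register.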

 

  • Risk Monitoring & controlling

The level of risk on a project will be tracked, monitored and reported throughout the project life cycle.

A “Top 10 Risk List” will be maintained by the project team and will be reported as a component of the project status reporting process for this project.

All project change requests will be analysed for their possible impact to the project risks.

Management will be notified of important changes to risk status as a component of the Executive Project Status Report.

 

 

The triple constraint in Project Management

The triple constraint comprises the key attributes that must be handled effectively for the successful completion and closure of any project.

These constraints are: TIME…..COST…..SCOPE

If we consider the below scenario:

(Note: X, Y and Z are just used for explanation, and they represent units of each, e.g. Z Time = 5 days, X Tasks = 30 tasks and Y Cost = $1000, meaning that in order to deliver the 30 tasks in 5 days you need $1000 of budget, etc.)

The figure below represents a triangle with three dimensions (X, Y and Z).

  • If Z Time is the time required to deliver X Tasks using Y Cost (budget)

THEN

  • Y COST is needed to deliver X Tasks in Z Time

THEN

  • X TASKS can only be delivered if I have Y Cost and in Z Time
  • Quality is the area of the triangle – to keep quality the same, any change in one of the triangle's dimensions needs some adjustment in the other two dimensions
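
Using the note's numbers (30 tasks in 5 days for $1000), the trade-off can be sketched with a deliberately naive model in which cost scales linearly with scope and inversely with the time allowed. Real projects rarely scale this cleanly; this only illustrates the direction of the trade-offs.

```python
# A toy illustration of the triple constraint, scaled from the baseline
# in the note above: 30 tasks in 5 days for $1000. The linear model is
# an assumption made purely for illustration.

BASELINE = {"tasks": 30, "days": 5, "cost": 1000.0}

def required_cost(tasks: int, days: float) -> float:
    """Cost to deliver `tasks` in `days`, scaled from the baseline."""
    scope_factor = tasks / BASELINE["tasks"]
    schedule_factor = BASELINE["days"] / days  # fewer days => higher cost
    return BASELINE["cost"] * scope_factor * schedule_factor

print(required_cost(30, 5))    # 1000.0 - the baseline itself
print(required_cost(60, 5))    # 2000.0 - double the scope, double the cost
print(required_cost(30, 2.5))  # 2000.0 - half the time, double the cost
```

Holding quality constant, any change to one input shows up as a change in the others, which is the whole point of the triangle.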

System requirements document in story form: an example

The humble story of the TAFE course enquirer

 

A long time ago, there was a man named TAFE, sitting on a bench. You could see the sadness in his eyes, the mark of a troubled past.

 

“I’m sorry TAFE we know what you’ve done for us, but this is the end. We are going public; we will look for other people as well”

 

Voices of the past echoed in his head. After everything he had done, after everything he had given, in the end it didn't matter.

 

“For almost two centuries, my family has given 500,000 enrolments and 1,000 courses, and this is what I get? This is not fair!”

 

He shouted into the air, hoping someone would hear his agony, but no one was there to listen. He laughed like a maniac, his eyes wandering in disbelief at the change. He knew he had to go private now that things had changed, but he was not confident he could keep up with the competitors around him, with their shiny promises to lure people in. He got up from the bench and wandered through the park, where he heard a sudden voice.

 

“looks like you’re having a bad day TAFE”

 

He looked around and saw an old man sitting on a bench, shuffling tarot cards in his hands. He moved to the bench and sat next to the old man. He did not know why, but somehow this old man had an irresistible charm.

 

“pick three cards”

 

the old man said. Upon hearing this, TAFE picked three cards.

 

As the old man looked at the first card, which showed crossed swords:

“Hmm… looks like you are worried about competition”

 

As he looked at the second card, which showed an anchor:

“With your way of doing things, people don't seem to have much chance of asking you questions, finding out what you can offer, or keeping in contact”

 

As he looked at the third card, which showed a wall:

“It looks like your customers need to go through a wall of obstacles to find what they need from you”

 

TAFE was stunned. The cards were spot on about the worries he had been having.

“pick another three cards”

 

TAFE's hands automatically moved towards the cards. As he tried to pick them up, the old man grabbed his hands.

“maybe I’ll just show you”

 

A vision passed before TAFE's eyes. He did not understand anything until he looked at the calendar: TAFE was in the future. He seemed to have done well for himself, and the way he was doing things now seemed very different from how he had done them before. Curious how, TAFE quickly jumped onto his computer and searched for his website. When he accessed it, he saw something amazing: the search engine had been replaced so that people could search by keywords, like Google. Having come this far, he got more curious, so he searched for his favourite course. There he found a subscription button. He pressed it, and an email popped up on his phone confirming his subscription.

 

“brilliant!!”

He shouted. He then browsed some more and found an enquiry button.

 

“what’s this?”

TAFE asked himself. He clicked the button, and it opened a text message box.

 

“this is it?”

TAFE was surprised; the system had been stripped down to a bare minimum he hadn't thought possible. He then submitted an enquiry and got a response in record time. TAFE was marvelling at his own system when suddenly he heard a voice at the door.

 

“enjoying yourself, TAFE?”

The old man was standing at the door, only this time he looked familiar, like an old friend.

 

“some fortune teller you are… who are you?”

The old man answered

“I am your future. No more will we be bound by EBS and its shackles of documents. No more will we be bound by long and boring processes. No more will we be scared of the competition…”

The old man stared into TAFE's eyes and said.

 

“my name is TAFECourseEnquirer”

At that moment, TAFE woke up on his bench. He realised he had never moved around the park in the first place. It all seemed like a dream.

 

“it doesn’t have to be a dream”

He muttered to himself as he stood up and walked on, eyes full of determination.

Why thinking about information security makes sense

To fully understand why information security is important, we need to understand both the value of information and the consequences of such information being compromised.

The Value of Information

To understand the value of information, let’s start by examining some typical information held by both businesses and individuals. At the very least, businesses will hold sensitive information on their employees, salary information, financial results, and business plans for the year ahead. They may also hold trade secrets, research and other information that gives them a competitive edge. Individuals usually hold sensitive personal information on their home computers and typically perform online functions such as banking, shopping and social networking; sharing their sensitive information with others over the internet.

As more and more of this information is stored and processed electronically and transmitted across company networks or the internet, the risk of unauthorised access increases and we are presented with growing challenges of how best to protect it.

Protecting Information

When you leave your house for work in the morning, you probably take steps to protect it and the contents from unauthorised access, damage and theft (e.g. turning off the lights, locking the doors and setting the alarm). This same principle can be applied to information – steps must be put in place to protect it. If left unprotected, information can be accessed by anyone. If information should fall into the wrong hands, it can wreck lives, bring down businesses and even be used to commit harm. Quite often, ensuring that information is appropriately protected is both a business and legal requirement. In addition, taking steps to protect your own personal information is a matter of privacy retention and will help prevent identity theft.

Information Breaches

When information is not adequately protected, it may be compromised and this is known as an information or security breach. The consequences of an information breach are severe. For businesses, a breach usually entails huge financial penalties, expensive law suits, loss of reputation and business. For individuals, a breach can lead to identity theft and damage to financial history or credit rating. Recovering from information breaches can take years and the costs are huge.

For example, a widely publicised information breach occurred at a popular clothing company in Australia during 2014-2015, when over 45 million credit and debit cards and nearly 500,000 records containing customer names and driver's licence numbers were compromised. The information is believed to have been exposed due to inadequate protection of the company's wireless networks. The final costs of the breach are expected to run into the hundreds of millions of dollars, and possibly over $1 billion.

So, that is why thinking of information security makes sense.

Sensitive data, cloud data and data privacy

Sensitive Data and Data privacy
Sensitive information is data that must be protected from unauthorized access to safeguard the privacy or security of an individual or organization.
There are three main types of sensitive information:
Personal information: Sensitive personally identifiable information (PII) is data that can be traced back to an individual and that, if disclosed, could result in harm to that person. Such information includes biometric data, medical information, personally identifiable financial information (PIFI) and unique identifiers such as passport or Social Security numbers. Threats include not only crimes such as identity theft but also disclosure of personal information that the individual would prefer remained private. Sensitive PII should be encrypted both in transit and at rest.
Business information: Sensitive business information includes anything that poses a risk to the company in question if discovered by a competitor or the general public. Such information includes trade secrets, acquisition plans, financial data and supplier and customer information, among other possibilities. With the ever-increasing amount of data generated by businesses, methods of protecting corporate information from unauthorized access are becoming integral to corporate security. These methods include metadata management and document sanitization.
Classified information: Classified information pertains to a government body and is restricted according to level of sensitivity (for example, restricted, confidential, secret and top secret). Information is generally classified to protect security. Once the risk of harm has passed or decreased, classified information may be declassified and, possibly, made public.
Data privacy, also called information privacy, is the aspect of information technology (IT) that deals with the ability an organization or individual has to determine what data in a computer system can be shared with third parties.
While information technology is typically seen as the cause of privacy problems, there are also several ways in which it can help to solve them. There are rules, guidelines and best practices that can be used for designing privacy-preserving systems. Such possibilities range from ethically informed design methodologies to using encryption to protect personal information from unauthorized use.
There are also several industry guidelines that can be used to design privacy preserving IT systems. The Payment Card Industry Data Security Standard (see PCI DSS v3.0, 2013, in the Other Internet Resources), for example, gives very clear guidelines for privacy and security sensitive systems design in the domain of the credit card industry and its partners (retailers, banks). Various International Organization for Standardization (ISO) standards (Hone & Eloff 2002) also serve as a source of best practices and guidelines, especially with respect to security, for the design of privacy friendly systems.
Although data privacy and data security are often used as synonyms, they share more of a symbiotic type of relationship. Just as a home security system protects the privacy and integrity of a household, a data security policy is put in place to ensure data privacy. When a business is trusted with the personal and highly private information of its consumers, the business must enact an effective data security policy to protect this data. The following information offers specific details designed to create a more in depth understanding of data security and data privacy.
Data security is commonly referred to as the confidentiality, availability, and integrity of data. In other words, it is all of the practices and processes that are in place to ensure data isn’t being used or accessed by unauthorized individuals or parties. Data security ensures that the data is accurate and reliable and is available when those with authorized access need it. A data security plan includes facets such as collecting only the required information, keeping it safe, and destroying any information that is no longer needed. These steps will help any business meet the legal obligations of possessing sensitive data.
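
Two of those facets, collecting only the required information and destroying information that is no longer needed, can be sketched directly. The field names and retention period below are hypothetical examples, not recommendations for any specific regulation.

```python
# A minimal sketch of data minimisation and retention: keep only the
# fields the business actually needs, and purge records older than a
# retention period. Field names and the 90-day period are hypothetical.

import time

REQUIRED_FIELDS = {"customer_id", "email"}   # collect only these
RETENTION_SECONDS = 90 * 24 * 3600           # e.g. a 90-day retention policy

def minimise(raw: dict) -> dict:
    """Drop every field that is not on the required list."""
    return {k: v for k, v in raw.items() if k in REQUIRED_FIELDS}

def purge_expired(records, now=None):
    """Destroy records older than the retention period."""
    now = time.time() if now is None else now
    return [r for r in records if now - r["stored_at"] <= RETENTION_SECONDS]

record = minimise({"customer_id": 7, "email": "a@b.com", "mothers_maiden_name": "x"})
print(record)  # {'customer_id': 7, 'email': 'a@b.com'}
```

Data that is never collected, or that has already been destroyed, cannot be breached, which is why these two steps come before any discussion of encryption or access control.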
Data privacy is suitably defined as the appropriate use of data. When companies and merchants use data or information that is provided or entrusted to them, the data should be used according to the agreed purposes. The Federal Trade Commission enforces penalties against companies that have negated to ensure the privacy of a customer’s data. In some cases, companies have sold, disclosed, or rented volumes of the consumer information that was entrusted to them to other parties without getting prior approval.

Cloud Data Privacy
Cloud computing is one of the most important current trends in the field of information and communications technology, and ICT management. Elements of cloud computing are Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). The term cloud computing derives from the cloud symbol usually used to represent the internet and the complex infrastructure behind it in graphics.
Hardware and software are no longer procured and operated by users themselves but obtained as services. Cloud service providers enable users to access and use the necessary ICT resources via the internet. To provide these resources, providers often fall back upon other providers in the cloud, which, for example, make storage capacity available for customer data or computer capacity for data processing.
Cloud computing services are used both by consumers as well as by organisations and companies. Offers in cloud computing comprise, among other things, the provision of calculating and storage capacity; the provision and operation of development environments and of operating- and database-management systems; of web hosting; of web mail services; and of a growing number of different types of application software; for word processing and other office applications; customer relationship management; supply chain management; or for the storage and management of photos or personal health-related data (electronic health records), to name a few.
There are several steps you can, and should, take to ensure the security of your corporate data when moving to the cloud.
The issue of data privacy is at the forefront of everybody’s mind. Television commercials advertise security products and news programs frequently describe the latest data breach. Public perception aside, any organization has a legal obligation to ensure that the privacy of their employees and clients is protected.
Laws prohibit some data from being used for secondary reasons other than the purpose for which it was originally collected. You can’t collect data on the health of your employees, for example, and then use it to charge smokers with higher insurance premiums. Also, you can’t share certain data with third parties. In the world of cloud computing, this becomes much more difficult, as you now have a third party operating and managing your infrastructure. By its very nature, that provider will have access to your data.
If you're collecting and storing data in the cloud and it's subject to the legal requirements of one or more regulations, you must ensure that the cloud provider protects the privacy of the data in the appropriate manner. Just as with data collected within your organization, data collected in the cloud must only be used for the purpose for which it was initially collected. If the individual specified that the data be used for one purpose, that assurance must be upheld.
Privacy notices often specify that individuals can access their data and have it deleted or modified. If the data is in a cloud provider’s environment, privacy requirements still apply and the enterprise must ensure this is allowed within a similar timeframe as if the data were stored on site. If data access can only be accomplished by personnel within the cloud provider’s enterprise, you must be satisfied they can fulfill the task as needed.
If you’ve entered into a click-wrap contract, you’ll be constrained to what the cloud provider has set out in these terms. Even with a tailored contract, the cloud provider may try to limit the data control to ensure its clients have a unified approach. This reduces the cloud provider’s overhead and the need to have specialized staff on hand. If complete control over your data is a necessity, you need to ensure this upfront and not bend to a cloud provider’s terms.
Cloud security architecture is effective only if the correct defensive implementations are in place. An efficient cloud security architecture should recognize the issues that will arise with security management.

Deterrent controls
These controls are intended to reduce attacks on a cloud system. Much like a warning sign on a fence or a property, deterrent controls typically reduce the threat level by informing potential attackers that there will be adverse consequences for them if they proceed. (Some consider them a subset of preventive controls.)

Preventive controls
Preventive controls strengthen the system against incidents, generally by reducing if not actually eliminating vulnerabilities. Strong authentication of cloud users, for instance, makes it less likely that unauthorized users can access cloud systems, and more likely that cloud users are positively identified.
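
Strong authentication is often implemented with one-time passwords. As one concrete sketch of such a preventive control (not tied to any particular cloud provider), the TOTP scheme used by most authenticator apps can be built from HOTP using only the Python standard library:

```python
# A sketch of one-time-password generation: HOTP (RFC 4226) and TOTP
# (RFC 6238), the mechanism behind most authenticator apps, shown here
# as an example of strong authentication of cloud users.

import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA-1
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    return hotp(secret, int(time.time()) // period)

# RFC 4226's published test secret and its first test vector:
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because the code changes every 30 seconds and is derived from a shared secret, a stolen password alone is no longer enough to access the cloud account.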

Detective controls
Detective controls are intended to detect and react appropriately to any incidents that occur. In the event of an attack, a detective control will signal the preventive or corrective controls to address the issue. System and network security monitoring, including intrusion detection and prevention arrangements, is typically employed to detect attacks on cloud systems and the supporting communications infrastructure.

Corrective controls
Corrective controls reduce the consequences of an incident, normally by limiting the damage. They come into effect during or after an incident. Restoring system backups in order to rebuild a compromised system is an example of a corrective control.

Overengineering in IT projects

Software developers throw the word "overengineering" around: "That code was overengineered." "This is an overengineered solution." Strangely enough, though, it's hard to find an actual definition for the word. People are always giving examples of overengineered code, but rarely do they say what the word actually means.

The dictionary just defines it as a combination of “over” (meaning “too much”) and “engineer” (meaning “design and build”). So per the dictionary, it would mean that you designed or built too much.

Designed or built too much?  And isn’t design a good thing?

Well, yeah, most projects could use more design. They suffer from underengineering. But once in a while, somebody really gets into it and just designs too much. Basically, this is like building an orbital laser to destroy an anthill. An orbital laser is really cool, but it (a) costs too much, (b) takes too much time, and (c) is a maintenance nightmare. I mean, somebody’s going to have to go up there and fix it when it breaks.

The tricky part is: how do you know when you’re overengineering? What’s the line between good design and too much design?

When your design actually makes things more complex instead of simplifying things, you’re overengineering.  An orbital laser would hugely complicate the life of somebody who just needed to destroy some anthills, whereas some simple ant poison would greatly simplify their life by removing their ant problem (if it worked).

This isn’t to say that all complexity is caused by overengineering. In fact, most complexity is caused by underengineering. If you have to choose, the safer side to be on is overengineering. But that’s kind of like saying it’s safer to be facing away from an atom bomb blast than toward it. It’s true (because it protects your eyes more), but really, either way, it’s going to suck pretty bad.

The best way to avoid overengineering is simply not to design too far into the future. Overengineering tends to happen like this: somebody imagines a requirement with no idea of whether it will actually be needed. They design too far into the future, without actually knowing the future.

Now, if the developer really did know that the imagined feature would be needed in the future, it would be a mistake to design the code in such a way that the feature couldn’t be added later. It doesn’t need to be there now, but you’d be underengineering if you made it impossible to add later.

With overengineering, the big problem is that it makes it difficult for people to understand your code. There’s some piece built into the system that doesn’t really need to be there, and the person reading the code can’t figure out why it’s there, or even how the whole system works (since it’s now so complicated). It also has all the other flaws that designing too far into the future has, such as locking you into a particular design before you can actually be certain it’s the right one.

There are lots of common ways to overengineer. The two most common are probably: making something extensible that won’t ever need to be extended, and making something way more generic than it needs to be.

A good example of the first (making something extensible that doesn’t need to be) would be making a web server that could support an unlimited number of other protocols in addition to HTTP. That’s kind of silly, because if you’re a web server, then you’re sending HTTP. You’re not an “every possible protocol” server.

However, in that same situation, underengineering would be not allowing for any future extension of the HTTP standard. That’s something that does need to be extensible, because it really might change.
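The contrast above can be sketched in code. This is a hypothetical illustration, not real server code: the first class abstracts over “any protocol” a web server will never speak, while the second commits to HTTP but leaves the method table open, since the HTTP standard itself really can grow:

```python
# Overengineered: a registry of arbitrary protocol handlers for a server
# that will only ever speak HTTP. (Illustrative sketch, not a real server.)
class AnyProtocolServer:
    def __init__(self):
        self.protocols = {}  # protocol name -> handler, for protocols

    def register(self, name, handler):
        self.protocols[name] = handler

# Right-sized: an HTTP server whose *method* handling is extensible,
# because new methods (as PATCH once was) really do get standardized.
class HttpServer:
    def __init__(self):
        self.methods = {"GET": self.handle_get}

    def handle_get(self, path):
        return 200, f"served {path}"

    def register_method(self, verb, handler):
        self.methods[verb] = handler  # the part that genuinely may change

    def handle(self, verb, path):
        handler = self.methods.get(verb)
        return handler(path) if handler else (405, "method not allowed")

server = HttpServer()
status, _ = server.handle("GET", "/index.html")     # -> 200
status2, _ = server.handle("PATCH", "/index.html")  # -> 405 until registered
```

The extensibility lives exactly where change is plausible (new HTTP methods) and nowhere else (new protocols).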

The issue here is “How likely is it that this thing’s going to change?” If you can be 99.99% certain that some part of your system is never going to change (for example, the letters available in the English language probably won’t be changing much; that’s a fairly good certainty), you don’t need to make that part of the system very extensible.

There are just some things you have to assume won’t change (like, “We will never be serving any other protocol than HTTP”); otherwise your system just gets too complex, trying to take into account every possible unknown future change, when there probably won’t even be any future differences. This is the exception, rather than the rule (you should assume most things will change), but you have to have a few stable, unchanging things to build your system around.

In addition to being too generic on the whole-program level, individual components of the program can also be too generic. A function that processes strings doesn’t also have to process integers and arrays, if you’re never going to be getting arrays and integers as input.
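An overgeneric function next to its right-sized counterpart makes the cost concrete. A small invented example, assuming the real inputs are always strings:

```python
# Illustrative sketch: a function made "generic" over inputs it will never
# receive, versus the simple version that matches its actual inputs.

def shout_overgeneric(value):
    # Handles lists and ints "just in case" -- extra branches every
    # reader now has to puzzle over, for inputs that never arrive.
    if isinstance(value, list):
        return [shout_overgeneric(v) for v in value]
    if isinstance(value, int):
        value = str(value)
    return value.upper() + "!"

def shout(text: str) -> str:
    # The input is always a string, so say so and keep it simple.
    return text.upper() + "!"
```

Both behave identically on the inputs the program actually produces; only the second tells the reader what those inputs are.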

You don’t have to overengineer in a huge way to mess up your system, either. Little by little, tiny bits of overengineering can stack up into one huge complex mass.

Good design is design that leads to simplicity in implementation and maintenance, and makes it easy to understand the code. Overengineered design is design that leads to difficulty in implementation, makes maintenance a nightmare, and turns otherwise simple code into a twisty maze of complexity. It’s not nearly as common as underengineering, but it’s still important to watch out for.

Australian Census 2016 Who Is Liable?

It was a dreary day in August. You were trying to finish work early to get home before the Census night deadline, or risk paying a fine of $180 or more. You rushed home and sat down in front of the computer, ready to do the survey, only to find you were unable to access the Census website. You kept refreshing your browser, hoping for the content to load, but the error message kept reappearing like a broken record. Something was obviously wrong, and later that night you learned from social media that the Australian Bureau of Statistics (ABS) website had been experiencing outages, causing you and the rest of Australia grief. You were less than impressed and immediately blamed the ABS for wasting your valuable time. But were they the main culprit in this chaos?

The Australian Bureau of Statistics, as its name suggests, is the government agency responsible for gathering data from the public to understand Australia’s inhabitants and their needs. Collected data are analysed and referenced by the government and the community, enabling better planning on social, economic, environmental and population matters. The ABS enlisted the tech giant IBM, under a staggering $9.6 million contract, to host and manage traffic for the 2016 online Census. Various tests were done before the Census kicked off; however, come Census night, the website crashed continually. It was a frustrating incident even for then Prime Minister Malcolm Turnbull, who expressed his disappointment and specifically called out both IBM and the ABS. Of course, if you are the client (ABS), your natural instinct is to point the finger at the person you are paying to do the job (the service provider, IBM).

So what went wrong? The ABS’s initial reaction was to suggest a DDoS (Distributed Denial of Service) attack, or in plain English, a deliberate flood of traffic intended to overwhelm their systems. But further investigation suggested their servers simply could not cope with the surge of legitimate users trying to access the website at the same time. IBM was dragged through the mud, its integrity tainted, and heavy criticism came from left, right and centre because of the Census site’s poor design and architecture. In essence, IBM’s lack of due diligence cost it dearly, and perhaps cost it future multi-million-dollar government deals in the pipeline. It makes me wonder how a veteran technology company, one that has withstood the test of time, could mess up so substantially, and with, of all clients, the most bureaucratic one: the government!
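A back-of-envelope calculation shows why a nationwide same-night deadline is brutal on capacity planning. The numbers below are illustrative assumptions, not ABS's actual figures:

```python
# Rough capacity estimate for a "whole country submits tonight" event.
# All figures are illustrative assumptions for the sake of the arithmetic.
households = 10_000_000   # roughly the number of Australian households
window_hours = 4          # evening window when most submissions arrive

# Average submissions per second if spread evenly over the window...
avg_rate = households / (window_hours * 3600)

# ...but real traffic is bursty; assume peaks several times the average.
peak_rate = avg_rate * 5

print(f"average: {avg_rate:.0f}/s, peak estimate: {peak_rate:.0f}/s")
```

Even with these conservative assumptions, the site has to sustain hundreds of submissions per second on average and thousands at peak, which is exactly the kind of surge the load testing needed to exercise.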