Day: November 19, 2016

Why thinking about information security makes sense

To fully understand why information security is important, we need to understand both the value of information and the consequences of such information being compromised.

The Value of Information

To understand the value of information, let’s start by examining some typical information held by both businesses and individuals. At the very least, businesses hold sensitive information about their employees, along with salary details, financial results, and business plans for the year ahead. They may also hold trade secrets, research and other information that gives them a competitive edge. Individuals usually keep sensitive personal information on their home computers and typically perform online tasks such as banking, shopping and social networking, sharing their sensitive information with others over the internet.

As more and more of this information is stored and processed electronically and transmitted across company networks or the internet, the risk of unauthorised access increases and we are presented with growing challenges of how best to protect it.

Protecting Information

When you leave your house for work in the morning, you probably take steps to protect it and its contents from unauthorised access, damage and theft (e.g. turning off the lights, locking the doors and setting the alarm). The same principle can be applied to information – steps must be put in place to protect it. If left unprotected, information can be accessed by anyone. If information falls into the wrong hands, it can wreck lives, bring down businesses and even be used to cause harm. Quite often, ensuring that information is appropriately protected is both a business and a legal requirement. In addition, taking steps to protect your own personal information is a matter of preserving your privacy and will help prevent identity theft.

Information Breaches

When information is not adequately protected, it may be compromised; this is known as an information or security breach. The consequences of an information breach are severe. For businesses, a breach usually entails huge financial penalties, expensive lawsuits, and loss of reputation and business. For individuals, a breach can lead to identity theft and damage to financial history or credit rating. Recovering from information breaches can take years and the costs are huge.

For example, a widely publicised information breach occurred at the popular clothing company in AUS during 2014-2015, when over 45 million credit/debit card numbers and nearly 500,000 records containing customer names and driver’s licence numbers were compromised. The information is believed to have been exposed because of inadequate protection on the company’s wireless networks. The final costs of the breach are expected to run into the hundreds of millions of dollars, and possibly over $1 billion.

So, that is why thinking about information security makes sense.

Sensitive Data, Cloud Data and Data Privacy

Sensitive Data and Data Privacy
Sensitive information is data that must be protected from unauthorized access to safeguard the privacy or security of an individual or organization.
There are three main types of sensitive information:
Personal information: Sensitive personally identifiable information (PII) is data that can be traced back to an individual and that, if disclosed, could result in harm to that person. Such information includes biometric data, medical information, personally identifiable financial information (PIFI) and unique identifiers such as passport or Social Security numbers. Threats include not only crimes such as identity theft but also disclosure of personal information that the individual would prefer remained private. Sensitive PII should be encrypted both in transit and at rest.
Business information: Sensitive business information includes anything that poses a risk to the company in question if discovered by a competitor or the general public. Such information includes trade secrets, acquisition plans, financial data and supplier and customer information, among other possibilities. With the ever-increasing amount of data generated by businesses, methods of protecting corporate information from unauthorized access are becoming integral to corporate security. These methods include metadata management and document sanitization.
Classified information: Classified information pertains to a government body and is restricted according to level of sensitivity (for example, restricted, confidential, secret and top secret). Information is generally classified to protect security. Once the risk of harm has passed or decreased, classified information may be declassified and, possibly, made public.
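The metadata management and document sanitisation methods mentioned above for business information can be sketched in a few lines. This is a minimal illustration under assumed conditions: the record layout and field names are hypothetical, not from any standard.

```python
# A minimal sketch of document sanitisation: stripping sensitive metadata
# from a record before it is shared externally. The field names here are
# illustrative assumptions, not a standard schema.

SENSITIVE_FIELDS = {"author", "last_modified_by", "internal_path", "revision_history"}

def sanitize(document):
    """Return a copy of the document with sensitive metadata removed."""
    return {k: v for k, v in document.items() if k not in SENSITIVE_FIELDS}

report = {
    "title": "Q3 Financial Summary",
    "body": "Revenue grew 4% quarter on quarter.",
    "author": "j.smith",
    "internal_path": r"\\fileserver\finance\drafts\q3-v7.docx",
}

clean = sanitize(report)
```

Real document formats (Office files, PDFs) carry metadata in the file container itself, so production sanitisation would use format-aware tooling; the idea, however, is the same.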
Data privacy, also called information privacy, is the aspect of information technology (IT) that deals with the ability an organization or individual has to determine what data in a computer system can be shared with third parties.
Although information technology is typically seen as the cause of privacy problems, there are also several ways in which it can help to solve them. There are rules, guidelines and best practices that can be used for designing privacy-preserving systems. Such possibilities range from ethically informed design methodologies to using encryption to protect personal information from unauthorized use.
There are also several industry guidelines that can be used to design privacy preserving IT systems. The Payment Card Industry Data Security Standard (see PCI DSS v3.0, 2013, in the Other Internet Resources), for example, gives very clear guidelines for privacy and security sensitive systems design in the domain of the credit card industry and its partners (retailers, banks). Various International Organization for Standardization (ISO) standards (Hone & Eloff 2002) also serve as a source of best practices and guidelines, especially with respect to security, for the design of privacy friendly systems.
Although data privacy and data security are often used as synonyms, they share more of a symbiotic type of relationship. Just as a home security system protects the privacy and integrity of a household, a data security policy is put in place to ensure data privacy. When a business is trusted with the personal and highly private information of its consumers, the business must enact an effective data security policy to protect this data. The following information offers specific details designed to create a more in depth understanding of data security and data privacy.
Data security is commonly referred to as the confidentiality, availability, and integrity of data. In other words, it is all of the practices and processes that are in place to ensure data isn’t being used or accessed by unauthorized individuals or parties. Data security ensures that the data is accurate and reliable and is available when those with authorized access need it. A data security plan includes facets such as collecting only the required information, keeping it safe, and destroying any information that is no longer needed. These steps will help any business meet the legal obligations of possessing sensitive data.
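One facet of the plan described above, destroying information that is no longer needed, can be sketched as a simple retention purge. This is a minimal illustration under assumed conditions: the record fields and the 90-day retention window are hypothetical choices, not a legal standard.

```python
# A minimal sketch of the "destroy information no longer needed" step:
# purging records older than a retention period. The field names and the
# 90-day window are illustrative assumptions.

from datetime import datetime, timedelta

RETENTION = timedelta(days=90)

def purge_expired(records, now):
    """Keep only records still inside the retention window."""
    return [r for r in records if now - r["collected_at"] <= RETENTION]

now = datetime(2016, 11, 19)
records = [
    {"id": 1, "collected_at": datetime(2016, 11, 1)},  # 18 days old - kept
    {"id": 2, "collected_at": datetime(2016, 7, 1)},   # ~141 days old - purged
]
kept = purge_expired(records, now)
```

In practice the purge would also have to cover backups and logs, which is where most retention plans fall short.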
Data privacy is best defined as the appropriate use of data. When companies and merchants use data or information that is provided or entrusted to them, the data should be used only for the agreed purposes. The Federal Trade Commission enforces penalties against companies that have neglected to ensure the privacy of customers’ data. In some cases, companies have sold, disclosed, or rented volumes of the consumer information entrusted to them to other parties without prior approval.

Cloud Data Privacy
Cloud computing is one of the most important current trends in the field of information and communications technology, and ICT management. Elements of cloud computing are Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). The term cloud computing derives from the cloud symbol usually used to represent the internet and the complex infrastructure behind it in graphics.
Hardware and software are no longer procured and operated by users themselves but obtained as services. Cloud service providers enable users to access and use the necessary ICT resources via the internet. To provide these resources, providers often fall back upon other providers in the cloud, which, for example, make storage capacity available for customer data or computer capacity for data processing.
Cloud computing services are used by consumers as well as by organisations and companies. Offerings in cloud computing comprise, among other things, computing and storage capacity; development environments and operating- and database-management systems; web hosting; web mail services; and a growing range of application software, from word processing and other office applications to customer relationship management, supply chain management, and the storage and management of photos or personal health-related data (electronic health records), to name a few.
There are several steps you can, and should, take to ensure the security of your corporate data when moving to the cloud.
The issue of data privacy is at the forefront of everybody’s mind. Television commercials advertise security products and news programs frequently describe the latest data breach. Public perception aside, any organization has a legal obligation to ensure that the privacy of their employees and clients is protected.
Laws prohibit some data from being used for secondary purposes other than the one for which it was originally collected. You can’t collect data on the health of your employees, for example, and then use it to charge smokers higher insurance premiums. Nor can you share certain data with third parties. In the world of cloud computing, this becomes much more difficult, as you now have a third party operating and managing your infrastructure. By its very nature, that provider will have access to your data.
If you’re collecting and storing data in the cloud that is subject to the legal requirements of one or more regulations, you must ensure the cloud provider protects the privacy of the data in the appropriate manner. Just like data collected within your organization, data collected in the cloud must only be used for the purpose for which it was originally collected. If the individual specified that the data be used for one purpose, that assurance must be upheld.
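The purpose-limitation rule described above can be sketched as a simple consent check before any use of the data. This is a minimal illustration under assumed conditions: the user identifier and purpose names are hypothetical.

```python
# A minimal sketch of purpose limitation: data may only be used for the
# purposes the individual originally consented to. Identifiers and
# purpose names are illustrative assumptions.

consents = {
    "user-42": {"order_fulfilment", "billing"},
}

def use_allowed(user_id, purpose):
    """Return True only if the user consented to this purpose."""
    return purpose in consents.get(user_id, set())

billing_ok = use_allowed("user-42", "billing")      # consented
marketing_ok = use_allowed("user-42", "marketing")  # never consented
```

The useful property of this shape is that the default is deny: an unknown user or an unlisted purpose is refused rather than silently permitted.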
Privacy notices often specify that individuals can access their data and have it deleted or modified. If the data is in a cloud provider’s environment, privacy requirements still apply and the enterprise must ensure this is allowed within a similar timeframe as if the data were stored on site. If data access can only be accomplished by personnel within the cloud provider’s enterprise, you must be satisfied they can fulfill the task as needed.
If you’ve entered into a click-wrap contract, you’ll be constrained by what the cloud provider has set out in its terms. Even with a tailored contract, the cloud provider may try to limit your control over the data so that it can take a unified approach across its clients. This reduces the cloud provider’s overhead and the need to have specialized staff on hand. If complete control over your data is a necessity, you need to ensure this upfront and not bend to a cloud provider’s terms.
Cloud security architecture is effective only if the correct defensive implementations are in place. An efficient cloud security architecture should recognize the issues that will arise with security management.

Deterrent controls
These controls are intended to reduce attacks on a cloud system. Much like a warning sign on a fence or a property, deterrent controls typically reduce the threat level by informing potential attackers that there will be adverse consequences for them if they proceed. (Some consider them a subset of preventive controls.)

Preventive controls
Preventive controls strengthen the system against incidents, generally by reducing if not actually eliminating vulnerabilities. Strong authentication of cloud users, for instance, makes it less likely that unauthorized users can access cloud systems, and more likely that cloud users are positively identified.
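The strong-authentication example above can be sketched with standard-library primitives: storing a salted password hash rather than the password itself, and comparing digests in constant time. This is a simplified sketch, not a production design; the iteration count and salt handling are illustrative assumptions.

```python
# A minimal sketch of a preventive control: verifying credentials against
# a salted PBKDF2 hash instead of a stored plaintext password.

import hashlib
import hmac
import os

def hash_password(password, salt):
    # PBKDF2 with many iterations slows down offline guessing attacks
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored = hash_password("correct horse battery staple", salt)

def verify(password):
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(hash_password(password, salt), stored)
```

A real system would add per-user salts in storage, lockouts, and ideally a second factor, but the core idea is that the system never holds the secret it is protecting in recoverable form.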

Detective controls
Detective controls are intended to detect and react appropriately to any incidents that occur. In the event of an attack, a detective control will signal the preventive or corrective controls to address the issue. System and network security monitoring, including intrusion detection and prevention arrangements, is typically employed to detect attacks on cloud systems and the supporting communications infrastructure.
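A tiny version of the monitoring described above is scanning an authentication log for repeated failures from one source, a common brute-force signal. This is a minimal sketch under assumed conditions: the log format and the alert threshold are hypothetical.

```python
# A minimal sketch of a detective control: flag any source with repeated
# failed logins. Log format and threshold are illustrative assumptions.

from collections import Counter

THRESHOLD = 3

def suspicious_sources(log_lines):
    failures = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            source = line.split()[-1]  # last field is the source address
            failures[source] += 1
    return {src for src, n in failures.items() if n >= THRESHOLD}

log = [
    "09:01 FAILED LOGIN user=root from 203.0.113.9",
    "09:01 FAILED LOGIN user=admin from 203.0.113.9",
    "09:02 OK LOGIN user=alice from 198.51.100.4",
    "09:02 FAILED LOGIN user=root from 203.0.113.9",
]
alerts = suspicious_sources(log)
```

Production intrusion detection works on the same principle at much larger scale: normalise events, count signals, alert past a threshold so a corrective control can respond.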

Corrective controls
Corrective controls reduce the consequences of an incident, normally by limiting the damage. They come into effect during or after an incident. Restoring system backups in order to rebuild a compromised system is an example of a corrective control.

Overengineering in IT projects

Software developers throw this word around: “overengineering.” “That code was overengineered.” “This is an overengineered solution.” Strangely enough, though, it’s hard to find an actual definition for the word. People are always giving examples of overengineered code, but rarely do they say what the word actually means.

The dictionary just defines it as a combination of “over” (meaning “too much”) and “engineer” (meaning “design and build”). So per the dictionary, it would mean that you designed or built too much.

Designed or built too much? And isn’t design a good thing?

Well, yeah, most projects could use more design. They suffer from underengineering. But once in a while, somebody really gets into it and just designs too much. Basically, this is like when somebody builds an orbital laser to destroy an anthill. An orbital laser is really cool, but it (a) costs too much (b) takes too much time and (c) is a maintenance nightmare. I mean, somebody’s going to have to go up there and fix it when it breaks.

The tricky part is–how do you know when you’re overengineering? What’s the line between good design and too much design?

When your design actually makes things more complex instead of simplifying things, you’re overengineering.  An orbital laser would hugely complicate the life of somebody who just needed to destroy some anthills, whereas some simple ant poison would greatly simplify their life by removing their ant problem (if it worked).

This isn’t to say that all complexity is caused by overengineering. In fact, most complexity is caused by underengineering. If you have to choose, the safer side to be on is overengineering. But that’s kind of like saying it’s safer to be facing away from an atom bomb blast than toward it. It’s true (because it protects your eyes more), but really, either way, it’s going to suck pretty bad.

The best way to avoid overengineering is simply not to design too far into the future. Overengineering tends to happen like this: somebody imagined a requirement without having any idea whether it was actually needed. They designed too far into the future, without actually knowing the future.

Now, if that developer really did know he’d need such a system in the future, it would be a mistake to design the code in such a way that the system couldn’t be added later. It doesn’t need to be there now, but you’d be underengineering if you made it impossible to add it later.

With overengineering, the big problem is that it makes it difficult for people to understand your code. There’s some piece built into the system that doesn’t really need to be there, and the person reading the code can’t figure out why it’s there, or even how the whole system works (since it’s now so complicated). It also has all the other flaws that designing too far into the future has, such as locking you into a particular design before you can actually be certain it’s the right one.

There are lots of common ways to overengineer. Probably the most common are: making something extensible that won’t ever need to be extended, and making something far more generic than it needs to be.

A good example of the first (making something extensible that doesn’t need to be) would be making a web server that could support an unlimited number of other protocols in addition to HTTP. That’s kind of silly, because if you’re a web server, then you’re sending HTTP. You’re not an “every possible protocol” server.

However, in that same situation, underengineering would be not allowing for any future extension of the HTTP standard. That’s something that does need to be extensible, because it really might change.
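The contrast between these two paragraphs can be sketched in code: hardcode the assumption that won’t change (the server speaks HTTP) while leaving the part that plausibly will change (the set of HTTP methods) open for extension. This is an illustrative sketch; the handler names and responses are made up.

```python
# A minimal sketch of extending in the right dimension: HTTP is a fixed
# assumption, but new HTTP methods can be registered without a redesign.
# All names here are illustrative.

handlers = {}

def route(method):
    """Register a handler function for an HTTP method."""
    def register(func):
        handlers[method] = func
        return func
    return register

@route("GET")
def handle_get(path):
    return f"200 OK: {path}"

def dispatch(method, path):
    if method not in handlers:
        return "501 Not Implemented"
    return handlers[method](path)

# Later, supporting a new method is one registration, not a rewrite:
@route("PATCH")
def handle_patch(path):
    return f"200 OK: patched {path}"
```

Note what is deliberately absent: there is no pluggable “protocol layer,” because the one thing the design commits to is that it will only ever serve HTTP.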

The issue here is “How likely it is that this thing’s going to change?” If you can be 99.99% certain that some part of your system is never going to change (for example, the letters available in the English language probably won’t be changing much–that’s a fairly good certainty) you don’t need to make that part of the system very extensible.

There are just some things you have to assume won’t change (like, “We will never be serving any other protocol than HTTP”)–otherwise your system just gets too complex, trying to take into account every possible unknown future change, when there probably won’t even be any future differences. This is the exception, rather than the rule (you should assume most things will change), but you have to have a few stable, unchanging things to build your system around.

In addition to being too generic on the whole-program level, individual components of the program can also be too generic. A function that processes strings doesn’t also have to process integers and arrays, if you’re never going to be getting arrays and integers as input.
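The string-processing example above can be made concrete. Both functions below are hypothetical illustrations: the first is scoped to the input the program actually receives, while the second branches on types it will never see.

```python
# A sketch contrasting a function scoped to its real input (strings) with
# an over-generic version that handles inputs the program never produces.

def normalise(text):
    """Does one job: tidy a string."""
    return text.strip().lower()

def normalise_anything(value):
    """Over-generic: extra branches, extra tests, extra reading effort."""
    if isinstance(value, str):
        return value.strip().lower()
    if isinstance(value, (list, tuple)):
        return [normalise_anything(v) for v in value]
    if isinstance(value, int):
        return value
    raise TypeError(value)
```

The second version isn’t wrong, but every branch it adds is code somebody must read, test, and maintain for inputs that, by the program’s own contract, cannot occur.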

You don’t have to overengineer in a huge way, either, to mess up your system. Little by little, tiny bits of overengineering can stack up into one huge complex mass.

Good design is design that leads to simplicity in implementation and maintenance, and makes it easy to understand the code. Overengineered design is design that leads to difficulty in implementation, makes maintenance a nightmare, and turns otherwise simple code into a twisty maze of complexity. It’s not nearly as common as underengineering, but it’s still important to watch out for.

Australian Census 2016: Who Is Liable?

It was a dreary day in August. You were trying to finish work early to get home before the Census night deadline or risk a fine of $180 or more. You rushed home and sat down in front of the computer, ready to do the survey, only to find you were unable to access the Census website. You kept refreshing your browser, hoping the page would load, but the error message kept appearing like a broken record. Something was obviously wrong, and later that night you found out from social media that the Australian Bureau of Statistics (ABS) website had been experiencing outages, causing you and the rest of Australia grief. You were less than impressed and immediately blamed the ABS for wasting your valuable time. But are they the main culprit for this chaos?

The Australian Bureau of Statistics, as its name suggests, is the government agency responsible for gathering data from the public to understand Australia’s inhabitants and their needs. The collected data are analysed and referenced by the government and the community, enabling better planning on social, economic, environmental and population matters. The ABS enlisted the tech giant IBM, on a staggering $9.6 million contract, to host and manage traffic for the 2016 online Census. Various tests were done before the Census kicked off; however, come Census night the website crashed continually. It was a frustrating incident even for Prime Minister Malcolm Turnbull, who expressed his disappointment and specifically called out both IBM and the ABS. Of course, if you are the client (ABS), your natural instinct is to point the finger at the person you are paying to do the job (the service provider, IBM).

So what went wrong? The ABS’s initial reaction was to point to a DDoS (Distributed Denial of Service) attack, or in plain English, a deliberate flood of traffic meant to overwhelm their systems. But further investigation suggests their servers simply could not cope with the surge in traffic from users trying to access the website at the same time. IBM was dragged through the mud, its integrity tainted, and heavy criticism came from left, right and centre because of the Census’s poor design and architecture. In essence, IBM’s lack of due diligence cost the company dearly, and perhaps future multi-million dollar government deals in the pipeline. It makes me wonder how a veteran technology company that has withstood the test of time messed up so substantially, and with, of all clients, the most bureaucratic one: the government!