Friday, May 25, 2018

General Data Protection Regulation (GDPR) - Data Governance & Compliance - Guidance & Resources

Posting this as I feel that, as of 25 May 2018, organisations and data protection professionals can still use this to better understand the new Act and ensure compliance.

The previous Data Protection Act, passed a generation ago, failed to account for today’s internet and digital technologies, social media and big data.

The new Act updates existing data protection laws, and sits alongside the General Data Protection Regulation (GDPR), which also takes effect on 25 May 2018. The Act implements the EU Law Enforcement Directive, as well as extending data protection laws to areas not covered by the GDPR.

The growing digital economy relies on consumer trust to make it work. The Act, along with the GDPR, provides a modernised, comprehensive package to protect people’s personal data in order to build that trust.

Our personal data is a version of each of us – what we’ve done, what we’ve read, where we’ve been and who is in our network. It is our health status, our financial decisions, our political beliefs and affiliations. Our desire to book a flight, update our browser, or sign up for a service should not be governed merely by terms and conditions set by an organisation.  Life is too short to decipher fine print.

The new laws provide tools and strengthened rights to allow people to take back control of their personal data.

The legislation requires increased transparency and accountability from organisations, and stronger rules to protect against theft and loss of data with serious sanctions and fines for those that deliberately or negligently misuse data.

This law is not about fines. It’s about putting the consumer and citizen first, and we can’t lose sight of that.

The creation of the Data Protection Act 2018 is not an end point, it’s just the beginning, in the same way that preparations for the GDPR don’t end on 25 May 2018. From this date, we’ll be enforcing the GDPR and the new Act but we all know that effective data protection requires clear evidence of commitment and ongoing effort.

It’s an evolutionary process for organisations – no business, industry sector or technology stands still. Organisations must continue to identify and address emerging privacy and security risks in the weeks, months and years beyond 2018.

Governed by these laws, organisations will have the incentive and the opportunity to put people at the heart of their data services. Being fair, clear and accountable to their customers and employees, organisations large and small will be able to innovate with the confidence that they are building deeper digital trust.

Guide to the General Data Protection Regulation 

The Guide to the General Data Protection Regulation (GDPR) explains the provisions of the GDPR to help organisations comply with its requirements. It is for those who have day-to-day responsibility for data protection.

Who does the GDPR apply to?

The GDPR applies to ‘controllers’ and ‘processors’.

A controller determines the purposes and means of processing personal data.

A processor is responsible for processing personal data on behalf of a controller.
If you are a processor, the GDPR places specific legal obligations on you; for example, you are required to maintain records of personal data and processing activities. You will have legal liability if you are responsible for a breach.

However, if you are a controller, you are not relieved of your obligations where a processor is involved – the GDPR places further obligations on you to ensure your contracts with processors comply with the GDPR.

The GDPR applies to processing carried out by organisations operating within the EU. It also applies to organisations outside the EU that offer goods or services to individuals in the EU.

The GDPR does not apply to certain activities including processing covered by the Law Enforcement Directive, processing for national security purposes and processing carried out by individuals purely for personal/household activities.

What information does the GDPR apply to?

Personal data

The GDPR applies to ‘personal data’ meaning any information relating to an identifiable person who can be directly or indirectly identified in particular by reference to an identifier.

This definition provides for a wide range of personal identifiers to constitute personal data, including name, identification number, location data or online identifier, reflecting changes in technology and the way organisations collect information about people.

The GDPR applies to both automated personal data and to manual filing systems where personal data are accessible according to specific criteria. This could include chronologically ordered sets of manual records containing personal data.

Personal data that has been pseudonymised – eg key-coded – can fall within the scope of the GDPR depending on how difficult it is to attribute the pseudonym to a particular individual.
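By way of illustration (this sketch is not part of the GDPR text or the ICO guidance), key-coding can be thought of as replacing a direct identifier with a keyed pseudonym; whoever holds the separately stored key can re-identify individuals, which is why key-coded data can remain personal data:

```python
import hmac
import hashlib

# Hypothetical example: the secret key would be stored separately from the
# pseudonymised dataset. Anyone holding the key can re-link pseudonyms to
# individuals, so the data may still fall within the scope of the GDPR.
SECRET_KEY = b"held-separately-by-the-controller"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a keyed pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# A record with the direct identifier replaced by a pseudonym.
record = {"email": "jane@example.com", "condition": "asthma"}
pseudonymised = {"subject_id": pseudonymise(record["email"]), "condition": record["condition"]}
```

The same input always yields the same pseudonym, so records about one person can still be linked together without revealing who they are.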

Sensitive personal data

The GDPR refers to sensitive personal data as “special categories of personal data” (see Article 9).

The special categories specifically include genetic data, and biometric data where processed to uniquely identify an individual.

Personal data relating to criminal convictions and offences is not included, but similar extra safeguards apply to its processing (see Article 10).

What is personal data?

The GDPR applies to the processing of personal data that is carried out:

wholly or partly by automated means; or

other than by automated means, where the personal data forms part of, or is intended to form part of, a filing system.

Personal data only includes information relating to natural persons who:

can be identified or are identifiable, directly from the information in question; or

can be indirectly identified from that information in combination with other information.

Personal data may also include special categories of personal data or criminal conviction and offences data. These are considered to be more sensitive and you may only process them in more limited circumstances.

Pseudonymised data can help reduce privacy risks by making it more difficult to identify individuals, but it is still personal data.

If personal data can be truly anonymised then the anonymised data is not subject to the GDPR. It is important to understand what personal data is in order to understand if the data has been anonymised.

Information about a deceased person does not constitute personal data and therefore is not subject to the GDPR.

Information about companies or public authorities is not personal data.

However, information about individuals acting as sole traders, employees, partners and company directors where they are individually identifiable and the information relates to them as an individual may constitute personal data.

What happens when different organisations process the same data for different purposes?

It is possible that although data does not relate to an identifiable individual for one controller, in the hands of another controller it does.

This is particularly the case where, for the purposes of one controller, the identity of the individuals is irrelevant and the data therefore does not relate to them.

However, when used for a different purpose, or in conjunction with additional information available to another controller, the data does relate to the identifiable individual.

It is therefore necessary to consider carefully the purpose for which the controller is using the data in order to decide whether it relates to an individual.

You should take care when you make an analysis of this nature.

GDPR Principles

The GDPR sets out seven key principles:

  • Lawfulness, fairness and transparency
  • Purpose limitation
  • Data minimisation
  • Accuracy
  • Storage limitation
  • Integrity and confidentiality (security)
  • Accountability

These principles should lie at the heart of your approach to processing personal data.

What’s new under the GDPR?

The principles are broadly similar to the principles in the Data Protection Act 1998 (the 1998 Act).

  1998 Act:                                GDPR:
  Principle 1 – fair and lawful            Principle (a) – lawfulness, fairness and transparency
  Principle 2 – purposes                   Principle (b) – purpose limitation
  Principle 3 – adequacy                   Principle (c) – data minimisation
  Principle 4 – accuracy                   Principle (d) – accuracy
  Principle 5 – retention                  Principle (e) – storage limitation
  Principle 6 – rights                     No principle – separate provisions in Chapter III
  Principle 7 – security                   Principle (f) – integrity and confidentiality
  Principle 8 – international transfers    No principle – separate provisions in Chapter V
  (no equivalent)                          Accountability principle

Why are the principles important?

The principles lie at the heart of the GDPR. They are set out right at the start of the legislation, and inform everything that follows. They don’t give hard and fast rules, but rather embody the spirit of the general data protection regime - and as such there are very limited exceptions.

Compliance with the spirit of these key principles is therefore a fundamental building block for good data protection practice. It is also key to your compliance with the detailed provisions of the GDPR.

Failure to comply with the principles may leave you open to substantial fines. Article 83(5)(a) states that infringements of the basic principles for processing personal data are subject to the highest tier of administrative fines. This could mean a fine of up to €20 million, or 4% of your total worldwide annual turnover, whichever is higher.

Accountability and governance 

Accountability is one of the data protection principles - it makes you responsible for complying with the GDPR and says that you must be able to demonstrate your compliance.

You need to put in place appropriate technical and organisational measures to meet the requirements of accountability.

There are a number of measures that you can, and in some cases must, take including:

  • adopting and implementing data protection policies;
  • taking a ‘data protection by design and default’ approach;
  • putting written contracts in place with organisations that process personal data on your behalf;
  • maintaining documentation of your processing activities;
  • implementing appropriate security measures;
  • recording and, where necessary, reporting personal data breaches;
  • carrying out data protection impact assessments for uses of personal data that are likely to result in high risk to individuals’ interests;
  • appointing a data protection officer; and
  • adhering to relevant codes of conduct and signing up to certification schemes.

Accountability obligations are ongoing. You must review and, where necessary, update the measures you put in place.

If you implement a privacy management framework this can help you embed your accountability measures and create a culture of privacy across your organisation.

Being accountable can help you to build trust with individuals and may help you mitigate enforcement action.


☐ We take responsibility for complying with the GDPR, at the highest management level and throughout our organisation.

☐ We keep evidence of the steps we take to comply with the GDPR.

We put in place appropriate technical and organisational measures, such as:

☐ adopting and implementing data protection policies (where proportionate);

☐ taking a ‘data protection by design and default’ approach - putting appropriate data protection measures in place throughout the entire lifecycle of our processing operations;

☐ putting written contracts in place with organisations that process personal data on our behalf;

☐ maintaining documentation of our processing activities;

☐ implementing appropriate security measures;

☐ recording and, where necessary, reporting personal data breaches;

☐ carrying out data protection impact assessments for uses of personal data that are likely to result in high risk to individuals’ interests;

☐ appointing a data protection officer (where necessary); and

☐ adhering to relevant codes of conduct and signing up to certification schemes (where possible).

☐ We review and update our accountability measures at appropriate intervals.


Security

A key principle of the GDPR is that you process personal data securely by means of ‘appropriate technical and organisational measures’ – this is the ‘security principle’.

Doing this requires you to consider things like risk analysis, organisational policies, and physical and technical measures.

You also have to take into account additional requirements about the security of your processing – and these also apply to data processors.

You can consider the state of the art and costs of implementation when deciding what measures to take – but they must be appropriate both to your circumstances and the risk your processing poses.

Where appropriate, you should look to use measures such as pseudonymisation and encryption.

Your measures must ensure the ‘confidentiality, integrity and availability’ of your systems and services and the personal data you process within them.

The measures must also enable you to restore access and availability to personal data in a timely manner in the event of a physical or technical incident.

You also need to ensure that you have appropriate processes in place to test the effectiveness of your measures, and undertake any required improvements.
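As a minimal sketch of the ‘restore access and availability’ requirement, a routine backup copy allows personal data to be recovered after a physical or technical incident (the file name and paths below are hypothetical):

```python
import shutil
import tempfile
from pathlib import Path

# Minimal sketch: keep a backup copy so access to personal data can be
# restored after an incident. Paths and file contents are hypothetical.
workdir = Path(tempfile.mkdtemp())
live = workdir / "customer_records.csv"
backup = workdir / "backup" / "customer_records.csv"

live.write_text("id,name\n1,Jane\n")  # the live dataset

backup.parent.mkdir()
shutil.copy2(live, backup)            # routine backup step

live.unlink()                         # simulate an incident destroying the live copy
shutil.copy2(backup, live)            # restore access in a timely manner
```

In practice a backup regime also needs to be tested regularly – a backup that has never been restored from is an unverified measure.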


☐ We undertake an analysis of the risks presented by our processing, and use this to assess the appropriate level of security we need to put in place.

☐ When deciding what measures to implement, we take account of the state of the art and costs of implementation.

☐ We have an information security policy (or equivalent) and take steps to make sure the policy is implemented.

☐ Where necessary, we have additional policies and ensure that controls are in place to enforce them.

☐ We make sure that we regularly review our information security policies and measures and, where necessary, improve them.

☐ We have put in place basic technical controls such as those specified by established frameworks like Cyber Essentials.

☐ We understand that we may also need to put other technical measures in place depending on our circumstances and the type of personal data we process.

☐ We use encryption and/or pseudonymisation where it is appropriate to do so.

☐ We understand the requirements of confidentiality, integrity and availability for the personal data we process.

☐ We make sure that we can restore access to personal data in the event of any incidents, such as by establishing an appropriate backup process.

☐ We conduct regular testing and reviews of our measures to ensure they remain effective, and act on the results of those tests where they highlight areas for improvement.

☐ Where appropriate, we implement measures that adhere to an approved code of conduct or certification mechanism.

☐ We ensure that any data processor we use also implements appropriate technical and organisational measures.

International transfers 

The GDPR imposes restrictions on the transfer of personal data outside the European Union, to third countries or international organisations.

These restrictions are in place to ensure that the level of protection of individuals afforded by the GDPR is not undermined.

When can personal data be transferred outside the European Union?

Personal data may only be transferred outside of the EU in compliance with the conditions for transfer set out in Chapter V of the GDPR.

What about transfers on the basis of a Commission decision?

Transfers may be made where the Commission has decided that a third country, a territory or one or more specific sectors in the third country, or an international organisation ensures an adequate level of protection.

What about transfers subject to appropriate safeguards?

You may transfer personal data where the organisation receiving the personal data has provided adequate safeguards. Individuals’ rights must be enforceable and effective legal remedies for individuals must be available following the transfer.

Adequate safeguards may be provided for by:

  • a legally binding agreement between public authorities or bodies;
  • binding corporate rules (agreements governing transfers made between organisations within a corporate group);
  • standard data protection clauses in the form of template transfer clauses adopted by the Commission;
  • standard data protection clauses in the form of template transfer clauses adopted by a supervisory authority and approved by the Commission;
  • compliance with an approved code of conduct approved by a supervisory authority;
  • certification under an approved certification mechanism as provided for in the GDPR; 
  • contractual clauses authorised by the competent supervisory authority; or
  • provisions inserted into administrative arrangements between public authorities or bodies, authorised by the competent supervisory authority.

What about transfers based on an organisation’s assessment of the adequacy of protection?           

The GDPR limits your ability to transfer personal data outside the EU where this is based only on your own assessment of the adequacy of the protection afforded to the personal data.

Authorisations of transfers made by Member States or supervisory authorities, and decisions of the Commission regarding adequate safeguards made under the Directive, remain in force until amended, replaced or repealed.

Are there any derogations from the prohibition on transfers of personal data outside of the EU?

The GDPR provides derogations from the general prohibition on transfers of personal data outside the EU for certain specific situations. A transfer, or set of transfers, may be made where the transfer is:

  • made with the individual’s informed consent;
  • necessary for the performance of a contract between the individual and the organisation or for pre-contractual steps taken at the individual’s request;
  • necessary for the performance of a contract made in the interests of the individual between the controller and another person;
  • necessary for important reasons of public interest;
  • necessary for the establishment, exercise or defence of legal claims;
  • necessary to protect the vital interests of the data subject or other persons, where the data subject is physically or legally incapable of giving consent; or
  • made from a register which under UK or EU law is intended to provide information to the public (and which is open to consultation by either the public in general or those able to show a legitimate interest in inspecting the register).

The first three derogations are not available for the activities of public authorities in the exercise of their public powers.

What about one-off (or infrequent) transfers of personal data concerning only relatively few individuals?

Even where there is no Commission decision authorising transfers to the country in question, where it is not possible to demonstrate that individuals’ rights are protected by adequate safeguards, and where none of the derogations apply, the GDPR provides that personal data may still be transferred outside the EU.

However, such transfers are permitted only where the transfer:

  • is not being made by a public authority in the exercise of its public powers;
  • is not repetitive (similar transfers are not made on a regular basis);
  • involves data related to only a limited number of individuals;
  • is necessary for the purposes of the compelling legitimate interests of the organisation (provided such interests are not overridden by the interests of the individual); and
  • is made subject to suitable safeguards put in place by the organisation (in the light of an assessment of all the circumstances surrounding the transfer) to protect the personal data.

In these cases, organisations are obliged to inform the relevant supervisory authority of the transfer and provide additional information to individuals.

Personal data breaches 

The GDPR introduces a duty on all organisations to report certain types of personal data breach to the relevant supervisory authority. You must do this within 72 hours of becoming aware of the breach, where feasible.

If the breach is likely to result in a high risk of adversely affecting individuals’ rights and freedoms, you must also inform those individuals without undue delay.

You should ensure you have robust breach detection, investigation and internal reporting procedures in place. This will facilitate decision-making about whether or not you need to notify the relevant supervisory authority and the affected individuals.

You must also keep a record of any personal data breaches, regardless of whether you are required to notify.


Preparing for a personal data breach

☐ We know how to recognise a personal data breach.

☐ We understand that a personal data breach isn’t only about loss or theft of personal data.

☐ We have prepared a response plan for addressing any personal data breaches that occur.

☐ We have allocated responsibility for managing breaches to a dedicated person or team.

☐ Our staff know how to escalate a security incident to the appropriate person or team in our organisation to determine whether a breach has occurred.

Responding to a personal data breach

☐ We have in place a process to assess the likely risk to individuals as a result of a breach.

☐ We know who is the relevant supervisory authority for our processing activities.

☐ We have a process to notify the ICO of a breach within 72 hours of becoming aware of it, even if we do not have all the details yet.

☐ We know what information we must give the ICO about a breach.

☐ We have a process to inform affected individuals about a breach when it is likely to result in a high risk to their rights and freedoms.

☐ We know we must inform affected individuals without undue delay.

☐ We know what information about a breach we must provide to individuals, and that we should provide advice to help them protect themselves from its effects.

What is a personal data breach?

A personal data breach means a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data. This includes breaches that are the result of both accidental and deliberate causes. It also means that a breach is more than just about losing personal data.

☐ We document all breaches, even if they don’t all need to be reported.

A personal data breach can be broadly defined as a security incident that has affected the confidentiality, integrity or availability of personal data. In short, there will be a personal data breach whenever any personal data is lost, destroyed, corrupted or disclosed; if someone accesses the data or passes it on without proper authorisation; or if the data is made unavailable, for example, when it has been encrypted by ransomware, or accidentally lost or destroyed.

Recital 87 of the GDPR makes clear that when a security incident takes place, you should quickly establish whether a personal data breach has occurred and, if so, promptly take steps to address it, including telling the ICO if required.

In assessing risk to rights and freedoms, it’s important to focus on the potential negative consequences for individuals. Recital 85 of the GDPR explains that:

“A personal data breach may, if not addressed in an appropriate and timely manner, result in physical, material or non-material damage to natural persons such as loss of control over their personal data or limitation of their rights, discrimination, identity theft or fraud, financial loss, unauthorised reversal of pseudonymisation, damage to reputation, loss of confidentiality of personal data protected by professional secrecy or any other significant economic or social disadvantage to the natural person concerned.”

So, on becoming aware of a breach, you should try to contain it and assess the potential adverse consequences for individuals, based on how serious or substantial these are, and how likely they are to happen.

What role do processors have?

If your organisation uses a data processor, and this processor suffers a breach, then under Article 33(2) it must inform you without undue delay after becoming aware of it.

This requirement allows you to take steps to address the breach and meet your breach-reporting obligations under the GDPR.

If you use a processor, the requirements on breach reporting should be detailed in the contract between you and your processor, as required under Article 28. For more details about contracts, please see our draft GDPR guidance on contracts and liabilities between controllers and processors.

How much time do we have to report a breach?

You must report a notifiable breach to the ICO without undue delay, but not later than 72 hours after becoming aware of it. If you take longer than this, you must give reasons for the delay.

Section II of the Article 29 Working Party Guidelines on personal data breach notification gives more details of when a controller can be considered to have “become aware” of a breach.

What information must a breach notification to the supervisory authority contain?

When reporting a breach, the GDPR says you must provide:

  • a description of the nature of the personal data breach including, where possible:
  • the categories and approximate number of individuals concerned; and
  • the categories and approximate number of personal data records concerned;
  • the name and contact details of the data protection officer (if your organisation has one) or other contact point where more information can be obtained;
  • a description of the likely consequences of the personal data breach; and
  • a description of the measures taken, or proposed to be taken, to deal with the personal data breach, including, where appropriate, the measures taken to mitigate any possible adverse effects.
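Purely as an illustration (the field names below are my own, not an official ICO schema), the required contents map naturally onto a simple record structure that a breach-response team might use internally before submitting a report:

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative only: field names are not an official schema, but they mirror
# the information Article 33(3) says a notification must contain.
@dataclass
class BreachNotification:
    nature_of_breach: str                       # description of what happened
    categories_of_individuals: List[str]        # e.g. ["customers", "employees"]
    approx_individuals_affected: Optional[int]  # approximate number, where known
    categories_of_records: List[str]            # e.g. ["contact details"]
    approx_records_affected: Optional[int]      # approximate number, where known
    contact_point: str                          # DPO or other contact details
    likely_consequences: str
    measures_taken_or_proposed: str

    def is_complete(self) -> bool:
        """Missing details may be supplied in phases rather than withheld."""
        return None not in (self.approx_individuals_affected, self.approx_records_affected)

report = BreachNotification(
    nature_of_breach="Phishing email led to unauthorised mailbox access.",
    categories_of_individuals=["customers"],
    approx_individuals_affected=250,
    categories_of_records=["contact details"],
    approx_records_affected=250,
    contact_point="Data Protection Officer, dpo@example.com",
    likely_consequences="Possible exposure of names and email addresses.",
    measures_taken_or_proposed="Passwords reset; affected individuals informed.",
)
```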

What if we don’t have all the required information available yet?

The GDPR recognises that it will not always be possible to investigate a breach fully within 72 hours to understand exactly what has happened and what needs to be done to mitigate it. So Article 33(4) allows you to provide the required information in phases, as long as this is done without undue further delay.

However, we expect controllers to prioritise the investigation, give it adequate resources, and expedite it urgently. You must still notify us of the breach when you become aware of it, and submit further information as soon as possible. If you know you won’t be able to provide full details within 72 hours, it is a good idea to explain the delay to us and tell us when you expect to submit more information.

When do we need to tell individuals about a breach?

If a breach is likely to result in a high risk to the rights and freedoms of individuals, the GDPR says you must inform those concerned directly and without undue delay. In other words, this should take place as soon as possible.

A ‘high risk’ means the threshold for informing individuals is higher than for notifying the ICO. 

Again, you will need to assess both the severity of the potential or actual impact on individuals as a result of a breach and the likelihood of this occurring. If the impact of the breach is more severe, the risk is higher; if the likelihood of the consequences is greater, then again the risk is higher. In such cases, you will need to promptly inform those affected, particularly if there is a need to mitigate an immediate risk of damage to them. One of the main reasons for informing individuals is to help them take steps to protect themselves from the effects of a breach.

What information must we provide to individuals when telling them about a breach?

You need to describe, in clear and plain language, the nature of the personal data breach and, at least:

  • the name and contact details of your data protection officer (if your organisation has one) or other contact point where more information can be obtained;
  • a description of the likely consequences of the personal data breach; and
  • a description of the measures taken, or proposed to be taken, to deal with the personal data breach and including, where appropriate, of the measures taken to mitigate any possible adverse effects.

Does the GDPR require us to take any other steps in response to a breach?

You should ensure that you record all breaches, regardless of whether you need to report them to the ICO.

Article 33(5) requires you to document the facts relating to the breach, its effects and the remedial action taken. This is part of your overall obligation to comply with the accountability principle, and allows us to verify your organisation’s compliance with its notification duties under the GDPR.
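An internal breach register covering these three elements – the facts, the effects and the remedial action taken – might be sketched as follows; the structure and field names are illustrative, not a prescribed format:

```python
import json
from datetime import datetime, timezone

# Illustrative breach register covering the three elements Article 33(5)
# requires you to document: the facts, the effects and the remedial action.
breach_register = []

def log_breach(facts: str, effects: str, remedial_action: str,
               reported_to_authority: bool) -> dict:
    entry = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "facts": facts,
        "effects": effects,
        "remedial_action": remedial_action,
        # Document every breach, including those that did not need reporting.
        "reported_to_authority": reported_to_authority,
    }
    breach_register.append(entry)
    return entry

entry = log_breach(
    facts="Laptop containing a customer list lost in transit; disk was encrypted.",
    effects="Low risk: data unreadable without the encryption key.",
    remedial_action="Remote wipe issued; staff reminded of transport policy.",
    reported_to_authority=False,
)
print(json.dumps(entry, indent=2))
```

A register like this is also what allows a supervisory authority to verify, after the fact, that notification duties were met.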

As with any security incident, you should investigate whether or not the breach was a result of human error or a systemic issue and see how a recurrence can be prevented – whether this is through better processes, further training or other corrective steps.

What else should we take into account?

The following aren’t specific GDPR requirements, but you may need to take them into account when you’ve experienced a breach.

It is important to be aware that you may have additional notification obligations under other laws if you experience a personal data breach. For example:

If you are a UK trust service provider, you must notify the ICO of a security breach, which may include a personal data breach, within 24 hours under the Electronic Identification and Trust Services (eIDAS) Regulation. Where this includes a personal data breach you can use our eIDAS breach notification form or the GDPR breach-reporting process; however, even if you report it under the GDPR, you must still do so within 24 hours. Please read our Guide to eIDAS for more information.

If your organisation is an operator of essential services or a digital service provider, you will have incident-reporting obligations under the NIS Directive. These are separate from personal data breach notification under the GDPR. If you suffer an incident that’s also a personal data breach, you will still need to report it to the ICO separately, and you should use the GDPR process for doing so.

You may also need to consider notifying third parties such as the police, insurers, professional bodies, or bank or credit card companies who can help reduce the risk of financial loss to individuals.

The European Data Protection Board, which will replace the Article 29 Working Party, may issue guidelines, recommendations and best practice advice that may include further guidance on personal data breaches. You should look out for any such future guidance. Likewise, you should be aware of any recommendations issued under relevant codes of conduct or sector-specific requirements that your organisation may be subject to.

What happens if we fail to notify?

Failing to notify a breach when required to do so can result in a significant fine of up to €10 million or 2 per cent of your global annual turnover, whichever is higher. The fine can be combined with the ICO’s other corrective powers under Article 58. So it’s important to make sure you have a robust breach-reporting process in place to ensure you detect breaches, notify them on time and provide the necessary details.


What derogations does the GDPR permit?

Article 23 enables Member States to introduce derogations to the GDPR in certain situations.

Member States can introduce exemptions from the GDPR’s transparency obligations and individual rights, but only where the restriction respects the essence of the individual’s fundamental rights and freedoms and is a necessary and proportionate measure in a democratic society to safeguard:

  • national security;
  • defence;
  • public security;
  • the prevention, investigation, detection or prosecution of criminal offences;
  • other important public interests, in particular economic or financial interests, including budgetary and taxation matters, public health and social security;
  • the protection of judicial independence and proceedings;
  • breaches of ethics in regulated professions;
  • monitoring, inspection or regulatory functions connected to the exercise of official authority regarding security, defence, other important public interests or crime/ethics prevention;
  • the protection of the individual, or the rights and freedoms of others; or
  • the enforcement of civil law matters.

What about other Member State derogations or exemptions?

Chapter IX provides that Member States can provide exemptions, derogations, conditions or rules in relation to specific processing activities. These include processing that relates to:

  • freedom of expression and freedom of information;
  • public access to official documents;
  • national identification numbers;
  • processing of employee data;
  • processing for archiving purposes and for scientific or historical research and statistical purposes;
  • secrecy obligations; and
  • churches and religious associations.

To assist organisations in applying the requirements of the GDPR in different contexts, we are working to produce guidance in a number of areas, for example children’s data, CCTV and big data.

Best of Luck & Regards,

Jai Krishna Ponnappan

Wednesday, January 3, 2018

Building Visibility Into Micro-services & Containers

This is a more generalized follow-up to the article I posted earlier, on May 16, 2017.

This article outlines a few best practices, factors and guidelines to keep in mind while planning, designing and leveraging a microservices- and container-based architecture within your IT operations, organization, projects or products.

As more enterprises embrace DevOps practices and move their workloads to the cloud, application architects are increasingly looking to design choices that maximize the speed of development and deployment. Two of the fastest-growing choices are containers and microservices.

Named by Gartner in the top 10 technology trends impacting IT infrastructure and operations, containers and microservices are playing a crucial role in cloud adoption and application-driven innovation. Benefits include ease of implementation and operation; acceleration of time-to-market; and streamlined, lower-resource processes.

This article describes how containers and microservices work, the benefits and
challenges of using them, and how a unified view of the enterprise stack and effective
application performance monitoring can help to fortify their benefits and address
their challenges.

DevOps: Collaborative Application Development

In the old-school development model, there was little integration of the separate functions within IT operations. DevOps is an evolved model that facilitates collaboration among operations, development, and quality assurance teams. They collaborate and communicate throughout the software development process.

DevOps is not a separate role held by a single person or group of individuals. Rather, it conceptualizes the structure needed to help operations and development work closely together so that application development and deployment are faster and of higher quality.

Containers: What Are They?

A container is a lightweight, standalone, executable software package that includes everything needed to run it: code, runtime, system tools, system libraries, and settings. Available for both Linux and Windows, containerized software will always run the same, regardless of the environment. Architecturally, containers are similar to virtual servers; however, unlike VMware or MS Hyper-V, they don’t need the overhead of a hypervisor management layer to function.

This lack of external dependencies makes containers very easy to deploy when compared to traditional application deployment models and a very good fit for DevOps and Continuous Delivery (CD), which promote rapid development and frequent release cycles.

Containers also isolate software from its surroundings — for example, differences between test and production environments — and help reduce conflicts running different software on the same infrastructure.

Containers are currently dominating the application development scene, especially in (but not limited to) cloud computing environments. Popular container platforms include Docker and Kubernetes.

Benefits of a container-based architecture include:

• Faster development
• Flexible deployment
• Improved isolation
• Microservices architecture
• Cost savings

Microservices: What Are They?

Microservices are a type of software architecture where application functionality is implemented through the use of small, self-contained units working together through APIs.

Each service has a limited scope, concentrates on a particular task, and is highly independent.

Some key characteristics of a microservice architecture as defined by Martin Fowler are:

Componentization via services:

Microservices are independent units that are easily replaced or upgraded. The units communicate with each other through remote procedure calls or web services.
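As a hedged sketch of that idea (the service name, route, and stock data below are all invented for illustration), two such units can communicate over a lightweight web API using nothing but the Python standard library:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A minimal "inventory" microservice: one small, self-contained unit
# exposing its single capability over HTTP.
class InventoryHandler(BaseHTTPRequestHandler):
    STOCK = {"widget": 12}  # illustrative data

    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item, "in_stock": self.STOCK.get(item, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the service on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consuming service depends only on the HTTP contract, not on the
# producer's code or database.
port = server.server_address[1]
with urlopen(f"http://127.0.0.1:{port}/widget") as resp:
    result = json.loads(resp.read())
print(result)
server.shutdown()
```

In a real system each unit would run in its own process or container and expose a stable, versioned API; the point here is only that the consumer talks to the producer exclusively through the web service interface.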

Business capabilities:

Legacy application development often splits teams into areas like the “server-side team” and the “database team”. Microservices development is built around business capability, with responsibility for a complete stack of technical functions, plus UX and project management.

Products rather than projects:

Instead of focusing on a software project that is delivered following completion, microservices treat applications as products of which they take ownership. They establish an ongoing dialogue with a goal of continually matching the app to the business function.

Dumb pipes, smart endpoints: 

Microservice applications require that all logic be maintained within the application and allow the transports for communicating between services to be lightweight.

Decentralized governance:

Teams are encouraged to choose the right tool for their own particular use case. However, some libraries may be shared among teams to avoid duplicative work.

Microservices are changing how teams are structured, allowing organizations to create teams centered on specific services, and giving them autonomy and responsibility in a constrained area. This approach lets the company respond to fluctuating business demands without interrupting core activities.

These are great benefits, but there are also risks that come with microservices proliferation. Microservices architecture is much more complex than legacy systems.

In turn, the environment becomes more complicated because teams must manage and support many moving parts.

Some of the things enterprises must be concerned about include:

• As you add more microservices, you must ensure they can scale together. More granularity means more moving parts, which increases complexity.

• When more services are interacting, the number of possible failure points increases. Smart developers stay one step ahead and plan for failure, ensuring their service remains operational in a diminished capacity if another service is down.

• Transitioning functions of a monolithic app to microservices creates many small components that constantly communicate. Tracing performance problems across tiers for a single business transaction can be difficult. This can be handled by correlating calls with a variety of methods including custom headers, tokens, or IDs.

• Traditional logging is ineffective. You would produce too many logs to easily locate a problem. Logging must be able to correlate events across several platforms. Just as with containers, the expanded use of microservices brings challenges, but they are challenges that can be effectively managed with the right solution.
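One common way to make the cross-service correlation mentioned above workable is to mint a single correlation ID at the edge and pass it along with every downstream call. A minimal sketch, assuming a custom header (the header name and service names are illustrative, not a standard):

```python
import uuid

# Illustrative header name - any agreed-upon key works.
CORRELATION_HEADER = "X-Correlation-ID"

def ensure_correlation_id(headers):
    """Reuse the caller's ID if present; otherwise start a new trace."""
    headers = dict(headers)  # don't mutate the caller's dict
    headers.setdefault(CORRELATION_HEADER, str(uuid.uuid4()))
    return headers

def call_downstream(service, headers):
    """Stand-in for an HTTP call: shows the ID being propagated."""
    headers = ensure_correlation_id(headers)
    return f"{service} handled request {headers[CORRELATION_HEADER]}"

incoming = ensure_correlation_id({})          # edge service mints the ID
print(call_downstream("billing", incoming))   # same ID appears at every hop
print(call_downstream("shipping", incoming))
```

Each service would also emit the ID in every log line, so events recorded on several platforms can be joined on one key when tracing a single business transaction.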

Enterprises that fail to adequately monitor these new application architecture techniques are risking a lot, from letting bad code releases slip through to making bad decisions based on inaccurately tracked resource utilization. In terms of day-to-day application development, there’s increased opportunity to be faster with solution releases and more informed about application performance.

With a comprehensive understanding of how their applications are performing, enterprises can:

• Find and fix problems faster and more easily

• Make better-informed business decisions about resource allocation

• Have total visibility into all assets and users

• Use the knowledge gained to contribute to enterprise-level strategy planning

However, application performance monitoring is not just about managing technology and providing analytics to IT teams and developers.

Without a clear and detailed understanding of how each component of a software stack works alone and in relation to other components, enterprise users cannot adjust their teams, resources, and roadmaps when they need to be modified.
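As a toy illustration of that idea (not a real APM product; all names here are invented), even a tiny per-component timing hook shows how visibility into each unit's behavior accumulates:

```python
import time
from collections import defaultdict
from functools import wraps

# The smallest possible "APM": record how long each component call takes
# so slow spots can be found later.
timings = defaultdict(list)

def monitored(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            timings[fn.__name__].append(time.perf_counter() - start)
    return wrapper

@monitored
def lookup_user(user_id):
    time.sleep(0.01)  # stand-in for real work (a query, a remote call)
    return {"id": user_id}

lookup_user(42)
print({name: len(samples) for name, samples in timings.items()})
```

A real monitoring platform layers percentiles, cross-tier correlation, alerting, and container-level metrics on top of this kind of basic measurement.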

On-Demand Scalability:

Understanding what is happening within your organization’s cloud environment and Business Transactions helps you make real-time capacity decisions based upon your company’s needs. Regardless of how the cloud/container/microservices landscape changes over the coming years, the ability to monitor, understand, and act based on your application performance data will be critical.

Native Container Monitoring Support: 

Most traditional monitoring solutions offer little insight into KPI metrics related to container environments. This lack of visibility has obvious implications for maintaining system performance and for the time it takes to identify and resolve application problems surfaced during the SDLC. Enterprises must address this challenge by deploying the latest generation of application monitoring. Increasingly, the solution of choice is a platform that provides specific support for monitoring container internals.

By understanding all aspects of their application development infrastructure, enterprises are better prepared to seize opportunities that will fuel their growth while protecting their investments, intellectual property, and user relationships.

Containers and microservices are part of a transition to more agile, responsive, and targeted development teams. Grab their benefits by encouraging their use, and do so confidently.

Tuesday, May 16, 2017

Microsoft Announces .NET Architecture Guidance

Four application architecture guidance drafts are available from Microsoft's Developer Division and the Visual Studio product teams.

These drafts cover four areas:

  1. Microservices and Docker, 
  2. ASP.NET Web Applications, 
  3. Azure Cloud Deployment, and
  4. Xamarin Mobile Applications. 

Each guidance consists of a set of documents appropriate for the topic.  Microsoft wants feedback from the general community on these draft documents.

The Microservices and Docker guidance consists of an Architecture eBook, a DevOps eBook, a sample application, and a video discussion of appropriate patterns.

The Architecture eBook is an introduction to developing microservices, and managing them with containers.

The sample application is referenced within the book. The book covers topics such as choosing the appropriate Docker containers and how to deploy them, designing and developing multi-container and microservice-based .NET applications, and how to secure them. The guidance is infrastructure agnostic. The DevOps book explains how to implement the entire Docker application lifecycle with Microsoft technologies. It is useful both for people who need to learn about Docker and for those already knowledgeable about Docker who want to learn about the Microsoft implementation.

The Web Applications guidance consists of an eBook and a sample application.

The book provides guidance on building monolithic web applications with ASP.NET Core and Azure. It is a complementary guide to the Microservices and Docker guidance discussed in the previous paragraph. The guidance covers the characteristics of modern web applications and their architectural principles, as well as how to develop and test ASP.NET Core MVC applications.

The Azure Cloud Deployment guidance consists of a set of reference architectures, an article on best practices, and an article on design patterns.

The reference architectures are ordered by scenario and include recommended 
practices and most have a deployable solution. The reference architectures are: 
identity management, hybrid network, network DMZ, VM workloads for Linux 
and Windows, and managed web application. The article on best practices 
focuses on REST and HATEOAS. The design patterns are indexed by 
category: availability, data management, design and implementation, messaging, 
management and monitoring, performance and scalability, resiliency, and security. 
The twenty-four patterns are also cataloged by pattern name so they can be 
found directly. Each pattern describes the problem to be solved, when it is 
appropriate to use the pattern, and a Microsoft Azure-based example. 
Nonetheless, the patterns are generic to any distributed system.

The Xamarin Mobile Application guidance consists of an eBook, a sample application, and an article on architecture patterns.

The guidance in the eBook covers building cross-platform enterprise applications 
using the Xamarin UI toolkit. It focuses on the core patterns and architectural 
guidance, specifically the MVVM pattern, dependency injection, navigation, 
validation, configuration management, containerized microservices, security, 
remote data access, and unit testing. The guidance references the sample application. 
Since the guide complements the other architecture guidance, microservices, 
containers, and web applications are not covered in depth. It is also not a 
detailed introduction to Xamarin.Forms. The guidance can also be used by 
decision makers who want an overview on architecture and technology before 
deciding on a cross-platform strategy. 

The patterns focus on the key architecture concepts, application layers, and the 
basic mobile software patterns such as MVVM, MVC, Business Façade, 
Singleton, Provider, and Async.  The case study illustrates the use of the patterns.

by Jai Krishna Ponnappan

Monday, November 7, 2016

Software Development - Science, Engineering, Art, or Craft

An opinionated look at our trade


There is general consensus that the software development process is imperfect, sometimes grossly so, for any number of human (management, skill, communication, clarity, etc) and technological (tooling, support, documentation, reliability, etc) reasons.  And yet, when it comes to talking about software development, we apply a variety of scientific/formal terms:
  1. Almost every college / university has a Computer Science curriculum.
  2. We use terms like "software engineer" on our resumes.
  3. We use the term "art" (as in "art of software development") to acknowledge the creative/creation process (the first volume of Donald Knuth's The Art of Computer Programming was published in 1968)
  4. There is even a "Manifesto for Software Craftsmanship" created in 2009 (the "Further Reading" link is a lot more interesting than the Manifesto itself.)
The literature on software development is full of phrases that talk about methodologies, giving the ignorant masses, newbie programmers, managers, and even senior developers the warm fuzzy illusion that there is some repeatable process to software development that warrants words like "science" and "engineer."  Those who recognize the loosey-goosey quality of those methodologies probably feel more comfortable describing the software development process as an "art" or a "craft", possibly bordering on "witchcraft."
The Etymology of the Terms we Use
By way of introduction, we'll use the etymology of these terms as a baseline of meaning.
Science
"what is known, knowledge (of something) acquired by study; information;" also "assurance of knowledge, certitude, certainty," from Old French science "knowledge, learning, application; corpus of human knowledge" (12c.), from Latin scientia "knowledge, a knowing; expertness," from sciens (genitive scientis) "intelligent, skilled," present participle of scire "to know," probably originally "to separate one thing from another, to distinguish," related to scindere "to cut, divide," from PIE root *skei- "to cut, to split" (source also of Greek skhizein "to split, rend, cleave," Gothic skaidan, Old English sceadan "to divide, separate;" see shed (v.)).
From late 14c. in English as "book-learning," also "a particular branch of knowledge or of learning;" also "skillfulness, cleverness; craftiness." From c. 1400 as "experiential knowledge;" also "a skill, handicraft; a trade." From late 14c. as "collective human knowledge" (especially "that gained by systematic observation, experiment, and reasoning"). Modern (restricted) sense of "body of regular or methodical observations or propositions concerning a particular subject or speculation" is attested from 1725; in 17c.-18c. this concept commonly was called philosophy. Sense of "non-arts studies" is attested from 1670s.
Engineer
mid-14c., enginour, "constructor of military engines," from Old French engigneor "engineer, architect, maker of war-engines; schemer" (12c.), from Late Latin ingeniare (see engine); general sense of "inventor, designer" is recorded from early 15c.; civil sense, in reference to public works, is recorded from c. 1600 but not the common meaning of the word until 19c (hence lingering distinction as civil engineer). Meaning "locomotive driver" is first attested 1832, American English. A "maker of engines" in ancient Greece was a mekhanopoios.
Art
early 13c., "skill as a result of learning or practice," from Old French art (10c.), from Latin artem (nominative ars) "work of art; practical skill; a business, craft," from PIE *ar-ti- (source also of Sanskrit rtih "manner, mode;" Greek arti "just," artios "complete, suitable," artizein "to prepare;" Latin artus "joint;" Armenian arnam "make;" German art "manner, mode"), from root *ar- "fit together, join" (see arm (n.1)).

In Middle English usually with a sense of "skill in scholarship and learning" (c. 1300), especially in the seven sciences, or liberal arts. This sense remains in Bachelor of Arts, etc. Meaning "human workmanship" (as opposed to nature) is from late 14c. Sense of "cunning and trickery" first attested c. 1600. Meaning "skill in creative arts" is first recorded 1610s; especially of painting, sculpture, etc., from 1660s. Broader sense of the word remains in artless.
Craft
Old English cræft (West Saxon, Northumbrian), -creft (Kentish), originally "power, physical strength, might," from Proto-Germanic *krab-/*kraf- (source also of Old Frisian kreft, Old High German chraft, German Kraft "strength, skill;" Old Norse kraptr "strength, virtue"). Sense expanded in Old English to include "skill, dexterity; art, science, talent" (via a notion of "mental power"), which led by late Old English to the meaning "trade, handicraft, calling," also "something built or made." The word still was used for "might, power" in Middle English.
Methodology (Method)
And for good measure:
early 15c., "regular, systematic treatment of disease," from Latin methodus "way of teaching or going," from Greek methodos "scientific inquiry, method of inquiry, investigation," originally "pursuit, a following after," from meta- "after" (see meta-) + hodos "a traveling, way" (see cede). Meaning "way of doing anything" is from 1580s; that of "orderliness, regularity" is from 1610s. In reference to a theory of acting associated with Russian director Konstantin Stanislavsky, it is attested from 1923.
Common Themes
Looking at the etymology of these words reveals some common themes (created using the amazing online mind-mapping tool MindMup):

What associations do we glean from this?
  • Science: Skill
  • Art: Craft, Skill
  • Craft: Art, Skill, Science
  • Methodology: Science
Interestingly, the term "engineer" does not directly associate with the etymology of science, art, craft, or methodology, but one could say those concepts are facets of "engineering" (diagram created with the old horse Visio):

This is arbitrary and doesn't immediately imply that software development is engineering, but it does give one an idea of what it might mean to call oneself a "software engineer."

The Scientific Method

How many software developers could actually, off the top of their head, describe the scientific method?  Here is one description:

The scientific method is an ongoing process, which usually begins with observations about the natural world. Human beings are naturally inquisitive, so they often come up with questions about things they see or hear and often develop ideas (hypotheses) about why things are the way they are. The best hypotheses lead to predictions that can be tested in various ways, including making further observations about nature. In general, the strongest tests of hypotheses come from carefully controlled and replicated experiments that gather empirical data. Depending on how well the tests match the predictions, the original hypothesis may require refinement, alteration, expansion or even rejection. If a particular hypothesis becomes very well supported a general theory may be developed.
Although procedures vary from one field of inquiry to another, identifiable features are frequently shared in common between them. The overall process of the scientific method involves making conjectures (hypotheses), deriving predictions from them as logical consequences, and then carrying out experiments based on those predictions. A hypothesis is a conjecture, based on knowledge obtained while formulating the question. The hypothesis might be very specific or it might be broad. Scientists then test hypotheses by conducting experiments. Under modern interpretations, a scientific hypothesis must be falsifiable, implying that it is possible to identify a possible outcome of an experiment that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.
We can summarize this as an iterative process of:
  1. Observation
  2. Question
  3. Hypothesize
  4. Create tests
  5. Gather data
  6. Revise hypothesis (go to step 3)
  7. Develop general theories and test for consistency with other theories and existing data

Software Development: Knowledge Acquisition and Prototyping / Proof of Concept
We can map the scientific method steps onto a software development process that includes knowledge acquisition and prototyping:
  1. Observe current processes.
  2. Ask questions about current processes to establish patterns.
  3. Hypothesize how those processes can be automated and/or improved.
  4. Create some tests that measure success/failure and performance/accuracy improvement.
  5. Gather some test data for our tests.
  6. Create some prototypes / proofs of concept and revise our hypotheses as a result of feedback as to whether the new processes meet existing process requirements, are successful, and/or improve performance/accuracy.
  7. Abstract the prototypes into general solutions and verify that they are consistent with other processes and data.

Where we Fail
Software development actually fails at most or all of these steps.  The scientific method is typically applied to the observation of nature, and as with nature, our observations sometimes lead to the wrong conclusions.  Just as we observe the sun going around the earth and erroneously arrive at a geocentric theory of the earth, sun, moon, and planets, our understanding of processes is fraught with errors and omissions.  As with observing nature, we eventually hit the hard reality that what we understood about the process is incomplete or incorrect.  Unlike nature, we have the additional drawback that while we're prototyping and trying to prove that our new software processes are better, the old processes are also evolving, so by the time we publish the application it is, like a new car driven off the lot, already obsolete.
Also, the software development process in general, and the knowledge acquisition phase in particular, typically doesn't determine how to measure success/failure and performance/accuracy improvement of an existing process, simply because we, as developers, lack the tools and resources to measure existing processes (there are exceptions, of course).  We are, after all, attempting to replace a process, not just understand an existing, immutable process and build on that knowledge.  How do we accurately measure the cost/time savings of a new process when we can barely describe the existing one?  How do we compare the training required to efficiently utilize a new process when everyone has "cultural knowledge" of the old process?  How do we overcome the resistance to new technology?  Nature doesn't care if we improve on a biological process by creating vaccines and gene therapies to cure cancer, but secretaries and entrenched engineers alike care very much about new processes that require new and different skills and potentially threaten to eliminate their jobs.
And frequently enough, the idea of how an existing process can be improved comes from incomplete (or completely omitted) observation and questions, beginning instead at step #3: imagining some software solution that "fixes" whatever concept the manager or CEO thinks can be improved upon, and yes, in many cases, the engineer thinks can be improved upon.  We see this in the glut of new frameworks, new languages, and forked GitHub repos that all claim to improve upon someone else's solution to a supposedly broken process.  The result is a house of cards of poorly documented and buggy implementations.  The result is why Stack Overflow exists.
As to the last step, abstracting the prototypes into general solutions, this is the biggest failure of all of software development -- re-use.  As Douglas Schmidt wrote in C++ Report magazine, in 1999 (source):
Although computing power and network bandwidth have increased dramatically in recent years, the design and implementation of networked applications remains expensive and error-prone. Much of the cost and effort stems from the continual re-discovery and re-invention of core patterns and framework components throughout the software industry.
Note that he wrote 17 years ago (at the time of this article) and it still remains true today.
Reuse has been a popular topic of debate and discussion for over 30 years in the software community. Many developers have successfully applied reuse opportunistically, e.g., by cutting and pasting code snippets from existing programs into new programs. Opportunistic reuse works fine in a limited way for individual programmers or small groups. However, it doesn't scale up across business units or enterprises to provide systematic software reuse. Systematic software reuse is a promising means to reduce development cycle time and cost, improve software quality, and leverage existing effort by constructing and applying multi-use assets like architectures, patterns, components, and frameworks.
Like many other promising techniques in the history of software, however, systematic reuse of software has not universally delivered significant improvements in quality and productivity.
The Silver Lining, Sort Of
Now granted, we can install a variety of flavors of Linux on everything from desktop computers to single-board computers like the Raspberry Pi or BeagleBone.  Cross-compilers let us compile and re-use code on multiple processors and hardware architectures.  .NET Core is open source, running on Windows, Mac, and Linux.  And scripting languages like Python, Ruby, and JavaScript hide the low-level compiled implementations in the subconscious mind of the programmer, enabling code portability for our custom applications.  However, we are still stuck in the realm of, as Mr. Schmidt puts it: "Component- and framework-based middleware technologies."  Using those technologies, we still have to "re-discover and re-invent" much of the glue that binds these components.

The Engineering Method

What is engineering?
Engineering is the application of mathematics, empirical evidence and scientific, economic, social, and practical knowledge in order to invent, innovate, design, build, maintain, research, and improve structures, machines, tools, systems, components, materials, processes and organizations. (source)

Seriously?  And we have the hubris to call what we do software engineering?
"Application of" necessarily assumes a deep understanding of whatever items in the list are applicable to what we're building.
"In order to" necessarily assumes proficiency in the set of necessary skills.
And the new stuff being invented, designed, and built certainly assumes that 1) it works, 2) it works well enough to be used, and 3) people will want it and know how to use it.
Engineering implies knowledge, skill, and successful adoption of the end product.  To that end, there are formal methodologies that have been developed, for example, the Department of Energy Systems Engineering Methodology (SEM):
The Department of Energy Systems Engineering Methodology (SEM) provides guidance for information systems engineering, project management, and quality assurance practices and procedures. The primary purpose of the methodology is to promote the development of reliable, cost-effective, computer-based solutions while making efficient use of resources. Use of the methodology will also aid in the status tracking, management control, and documentation efforts of a project. (source)
Engineering Process Overview

Notice some of the language:
  • to promote the development of:
      • reliable
      • cost-effective
      • computer-based solutions
  • the methodology will also aid in the:
      • status tracking
      • management control
      • and documentation efforts
This actually sounds like an attempt to apply the scientific method successfully to software development.
Another Engineering Model - the Spiral Development Model
There are many engineering models to choose from, but here is one more, the Spiral Development Model.  It consists of Phases, Reviews, and Iterations (source):
  • Phases
      • Inception Phase
      • Elaboration Phase
      • Construction Phase
      • Transition Phase
  • Review Milestones Process
      • Life Cycle Objectives Review
      • Life Cycle Architecture Review
      • Initial Operating Capability Review
      • Product Readiness Review + Functional Configuration Audit
  • Iterations
      • Iteration Structure Description: Composition of an 8-Week Iteration
      • Design Period Activities
      • Development Period Activities
      • Wrap-Up Period Activities

In each phase, the Custom Engineering Group (CEG) works closely with the customer to establish clear milestones to measure progress and success. (source)
This harks back a wee bit to Agile's "customer collaboration", but notice again the concepts of:
  • a project plan
  • a test plan
  • supporting documentation
  • customer sign-off of specifications
And the formality implied in:
  • investigation
  • design
  • development
  • test and user acceptance
True software engineering practically demands skilled professionals, whether they are developers, managers, technical writers, contract negotiators, etc.  Certainly there is room for people of all skill levels, but the structure of a real engineering team is such that "juniors" are mentored, monitored, and given appropriate tasks for their skill level and, in fact, even "senior" people review each other's work.
Where we Fail
Instead, we have Agile Development and its Manifesto (source, bold is mine):
  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan
Given this, how can we conclude that Agile Development is engineering at all?
The Agile Manifesto appears to specifically de-emphasize a scientific method for software development, and it also de-emphasizes the skills actual engineering requires of both software developers and managers, instead emphasizing an ill-defined psychological approach to software development involving people, interactions, collaboration, and flexibility.  While these are necessary skills as well, they are not more important than the skills and formal processes that software development requires.  In fact, Agile creates an artificial tension between the two sides of each of the bullet points above, often leading to an extreme adoption of the left side, for example, "the code is the documentation."
From real world experience, the Manifesto would be better written replacing the word "over" with "and":
  • Individuals and interactions and processes and tools
  • Working software and comprehensive documentation
  • Customer collaboration and contract negotiation
  • Responding to change and following a plan
However, this approach is rarely embraced by over-eager coders who want to dive into a new-fangled technology and start coding, and it also isn't embraced by managers who (incorrectly) view "over" as reducing cost and time to market and "and" as increasing them.  As with any approach, a balance must be struck that is appropriate for the business domain and the product to be delivered, but rarely does that conversation happen.
Regardless, Agile Development is not engineering.  But is it a methodology?

Software Development Methodologies: Are They Really?

As Chris Hohmann writes:
[I] chose the Macmillan online dictionary and found this:

Approach (noun): a particular way of thinking about or dealing with something
Philosophy: a system of beliefs that influences someone’s decisions and behaviour. A belief or attitude that someone uses for dealing with life in general
Methodology: the methods and principles used for doing a particular kind of work, especially scientific or academic research
Do we view whatever structure we impose on the software development process as an approach, a philosophy, or a methodology?
So-Called Methodologies
Wikipedia has pages here and here that list dozens of philosophies, and IT Knowledge Portal lists 13 software development methodologies here:
  1. Agile
  2. Crystal
  3. Dynamic Systems Development
  4. Extreme Programming
  5. Feature Driven Development
  6. Joint Application Development
  7. Lean Development
  8. Rapid Application Development
  9. Rational Unified Process
  10. Scrum
  11. Spiral (not the Spiral Development Model described earlier)
  12. System Development Life Cycle
  13. Waterfall (aka Traditional)
And even this list is missing so-called methodologies like Test Driven Development and Model Driven Development.
As methodology derives from Greek methodos: "scientific inquiry, method of inquiry, investigation", the definition of Methodology from Macmillan seems reasonable enough.  Let's look at two of the most common (and seen as opposing) so-called methodologies, Waterfall and Agile.
The waterfall model is a sequential (non-iterative) design process, used in software development processes, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of conception, initiation, analysis, design, construction, testing, production/implementation and maintenance.  The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Because no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.
The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce, although Royce did not use the term waterfall in that article. Royce presented this model as an example of a flawed, non-working model, which is how the term is generally used in writing about software development—to describe a critical view of a commonly used software development practice.
The earliest use of the term "waterfall" may have been in a 1976 paper by Bell and Thayer.
In 1985, the United States Department of Defense captured this approach in DOD-STD-2167A, their standards for working with software development contractors, which stated that "the contractor shall implement a software development cycle that includes the following six phases: Software Requirements Analysis, Preliminary Design, Detailed Design, Coding and Unit Testing, Integration, and Testing." (source)
Waterfall might actually be categorized as an approach, as there is no specific guidance for the six phases mentioned above -- they need to be part of the development cycle, but any specific scientific rigor applied to each of the phases is entirely missing.  It is ironic that the process originates from manufacturing and construction industries, where there is often considerable mathematical analysis / modeling performed before construction begins.  And certainly in the 1960's, construction of electronic hardware was expensive, and again, rigorous electrical circuit analysis using scientific methods would greatly mitigate the cost of after-the-fact changes.
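To make the sequential character of waterfall concrete, it can be sketched as a gated pipeline: each phase must pass its review before the next begins, and nothing ever loops back.  This is an illustrative sketch only, not an implementation of any standard; the `phase_done` callable is a hypothetical stand-in for whatever sign-off or review closes a phase:

```python
# Illustrative only: waterfall's phases modeled as a strictly sequential,
# gated pipeline with no provision for iterating back to an earlier phase.

PHASES = [
    "Software Requirements Analysis",
    "Preliminary Design",
    "Detailed Design",
    "Coding and Unit Testing",
    "Integration",
    "Testing",
]

def run_waterfall(phase_done):
    """Run each phase in order; stall at the first phase that fails its gate.

    phase_done: callable(phase_name) -> bool, a hypothetical stand-in for
    the review or sign-off that closes a phase.
    """
    completed = []
    for phase in PHASES:
        if not phase_done(phase):
            return completed, phase  # the project stalls here, no way back
        completed.append(phase)
    return completed, None

# Example: every gate passes until Integration fails its review.
done, stalled_at = run_waterfall(lambda phase: phase != "Integration")
# done holds the first four phases; stalled_at == "Integration"
```

The sketch makes the essay's point visible in miniature: a failed gate late in the sequence leaves all earlier work finished but the project stuck, with no structural way to revisit earlier phases.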
There are a variety of diagrams for Agile Software Development, I've chosen one that maps somewhat to the "Scientific Method" above:

Ironically, IT Knowledge Portal describes Agile as "a conceptual framework for undertaking software engineering projects."  It's difficult to understand how a conceptual framework can be described as a methodology.  It seems at best to be an approach.  More telling though is that the term "methodology" appears to have no foundation in "scientific inquiry, method of inquiry, investigation" when it comes to the software development process.
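The iterative cycle Agile diagrams typically depict can be sketched as a short loop: plan a slice of work, build it, review it with the customer, and repeat until the result is accepted.  A minimal sketch follows; all names are hypothetical and do not come from any particular framework:

```python
# Illustrative only: repeated short build/feedback cycles instead of
# one long sequential pipeline.  All names here are hypothetical.

def agile_iterate(plan, build, accept, max_iterations=10):
    """Loop plan -> build -> review until accepted or iterations run out."""
    product = None
    for iteration in range(1, max_iterations + 1):
        work = plan(product)             # plan: pick work based on feedback
        product = build(product, work)   # build: produce working software
        if accept(product):              # review: customer collaboration
            return product, iteration
    return product, max_iterations

# Toy usage: "product" is a number we grow toward a target of 3,
# one increment per iteration.
result, n = agile_iterate(
    plan=lambda p: 1,
    build=lambda p, work: (p or 0) + work,
    accept=lambda p: p >= 3,
)
# result == 3 after 3 iterations
```

Contrast this with the waterfall sketch: here feedback from the review step shapes the next plan, which is exactly the loop the diagrams draw.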

Software Development as a Craft

A craft is a pastime or a profession that requires particular skills and knowledge of skilled work. (source)
Historically, one's skill at a craft was described in three levels:
  1. Apprentice
  2. Journeyman
  3. Master Craftsman
When we look at software development as a craft, one of the advantages is that we move away from the Scientific Method and its emphasis on the natural world and physical processes.  We also move away from the conundrum of applying an approach, philosophy, or methodology to the software development process.  Instead, viewing software development as a craft emphasizes the skill of the craftsman (or woman).  We can also begin to establish rankings of "craft skill" using the general skills discussed above in the section on Engineering.
An apprenticeship is a system of training a new generation of practitioners of a trade or profession with on-the-job training and often some accompanying study (classroom work and reading). Apprenticeship also enables practitioners to gain a license to practice in a regulated profession. Most of their training is done while working for an employer who helps the apprentices learn their trade or profession, in exchange for their continued labor for an agreed period after they have achieved measurable competencies. Apprenticeships typically last 3 to 6 years. People who successfully complete an apprenticeship reach the "journeyman" or professional certification level of competence. (source)
A "journeyman" is a skilled worker who has successfully completed an official apprenticeship qualification in a building trade or craft. They are considered competent and authorized to work in that field as a fully qualified employee. A journeyman earns their license through education, supervised experience, and examination. [1] Although a journeyman has completed a trade certificate and is able to work as an employee, they are not yet able to work as a self-employed master craftsman.[2] The term journeyman was originally used in the medieval trade guilds. Journeymen were paid each day, and this is where the word "journey" derives from: journée, meaning "a day" in French. Each individual guild generally recognized three ranks of workers: apprentices, journeymen, and masters. A journeyman, as a qualified tradesman, could become a master, running their own business, although most continued working as employees. (source)
Master Craftsman
A master craftsman or master tradesman (sometimes called only master or grandmaster) was a member of a guild. In the European guild system, only masters and journeymen were allowed to be members of the guild.  An aspiring master would have to pass through the career chain from apprentice to journeyman before he could be elected to become a master craftsman. He would then have to produce a sum of money and a masterpiece before he could actually join the guild. If the masterpiece was not accepted by the masters, he was not allowed to join the guild, possibly remaining a journeyman for the rest of his life.  Originally, holders of the academic degree of "Master of Arts" were also considered, in the Medieval universities, as master craftsmen in their own academic field. (source)
Where do we Fit in This Scheme?
While we all begin as apprentices and our "training" is often through the anonymity of books and the Internet as well as code camps, our peers, and looking at other people's code, we eventually develop various technical and interpersonal skills.  And software development is somewhat unique as a tradecraft because we are always at different levels depending on the technology we are attempting to use.  So when we use the term "senior developer," we're really making an implied statement about what skills we have that rank us at least as a journeyman.  One of the common threads of the three levels of a tradecraft is that there is a master craftsman working with the apprentice and journeyman, who in fact certifies the person to move on to the next step.  Also, becoming a master craftsman required producing a "masterpiece," which is ironic because I know a few software developers who consider themselves "senior" and have only failed software projects under their belt.
And as Donald Knuth wrote in his preface to The Art of Computer Programming: "This book is...designed to train the reader in various skills that go into a programmer's craft."
Where we Fail
Analogies to the software development process include code reviews, pair programming, and even employee vs. contractor or business owner.  Unfortunately, the craft model of software development is not really adopted, perhaps because employers don't want to use a system that harks back at least to medieval days.  Code reviews, if they are done at all, can be very superficial and not part of a systematic development process.  Pair programming is usually considered a waste of resources.  Companies rarely provide the equivalent of a "certificate of accomplishment," which would be particularly beneficial when people are no longer in school and are learning skills that school never taught them (the entire Computer Science degree program is also questionable in its ability to actually produce, at a minimum, a journeyman in software development.)
Furthermore, given the newness of the software development field and the rapidly changing tools and technologies, the opportunity to work with a master craftsman is often hard to come by.  In many situations, one can at best apply general principles gleaned from skills in other technologies to the apprentice work in a new technology (besides the time-honored practice of "hitting the books" and looking up simple problems on Stack Overflow.)  In many cases, the process of becoming a journeyman or master craftsman is a bootstrapping process with one's peers.

Software Development as an Art

(With apologies to the reader, but Knuth states the matter so eloquently, I cannot express it better.)
Quoting Donald Knuth again: "The process of preparing programs for a digital computer is especially attractive, not only because it can be economically and scientifically rewarding, but also because it can be an aesthetic experience much like composing poetry or music." and: "I have tried to include all of the known ideas about sequential computer programming that are both beautiful and easy to state."
So there is an aesthetic experience associated with programming, and programs can have the qualities of being beautiful, perhaps where the aesthetic experience, the beauty of a piece of code, lies in its ability to easily describe (or self-describe) the algorithm.  In his excellent talk on Computer Programming as an Art (online PDF), Knuth states: "Meanwhile we have actually succeeded in making our discipline a science, and in a remarkably simple way: merely by deciding to call it 'computer science.'"  He makes a salient statement about science vs. art: "Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with.  Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something."
What a concise description of software development as an art - the process of learning something sufficiently to automate it!  But there's more: "...we should continually be striving to transform every art into a science: in the process, we advance the art."  An intriguing notion.  He also quotes Emma Lehmer (11/6/1906 - 5/7/2007, mathematician), when she wrote in 1956 that she had found coding to be "an exacting science as well as an intriguing art."  Knuth states that "The chief goal of my work as educator and author is to help people learn how to write beautiful programs...The possibility of writing beautiful programs, even in assembly language, is what got me hooked on programming in the first place."
And: "Some programs are elegant, some are exquisite, some are sparkling.  My claim is that it is possible to write grand programs, noble programs, truly magnificent ones!"
Knuth then continues with some specific elements of the aesthetics of a program:
  • It of course has to work correctly.
  • It won't be hard to change.
  • It is easily readable and understandable.
  • It should interact gracefully with its users, especially when recovering from human input errors and when providing meaningful error messages or flexible input formats.
  • It should use a computer's resources efficiently (but in the right places and at the right times, without prematurely optimizing the program.)
  • Aesthetic satisfaction is accomplished with limited tools.
  • "The use of our large-scale machines with their fancy operating systems and languages doesn't really seem to engender any love for programming, at least not at first."
  • "...we should make use of the idea of limited resources in our own education."
  • "Please, give us tools that are a pleasure to use, especially for our routine assignments, instead of providing something we have to fight with.  Please, give us tools that encourage us to write better programs, by enhancing our pleasure when we do so."
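Knuth's points about graceful interaction, meaningful error messages and flexible input formats, can be illustrated with a tiny example.  `parse_yes_no` is a hypothetical function written for this sketch, not part of any library:

```python
# A small illustration of graceful interaction: accept flexible input
# formats, and on failure say precisely what went wrong and what would
# work, instead of emitting an opaque error code.

def parse_yes_no(text):
    """Parse a yes/no answer leniently; raise a *helpful* error otherwise."""
    normalized = text.strip().lower()
    if normalized in ("y", "yes", "true", "1"):
        return True
    if normalized in ("n", "no", "false", "0"):
        return False
    raise ValueError(
        f"Could not interpret {text!r} as yes or no. "
        "Try 'yes', 'no', 'y', or 'n'."
    )

print(parse_yes_no("  YES "))  # flexible: whitespace and case are tolerated
```

Contrast the `ValueError` text above with a bare hex error code: the program tells the user what it saw and what it would have accepted, which is exactly the aesthetic quality Knuth describes.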
Knuth concludes his talk with: "We have seen that computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty.  A programmer who subconsciously views himself as an artist will enjoy what he does and will do it better."
Reading what Knuth wrote, one can understand the passion that was experienced with the advent of the 6502 processor and the first Apple II, Commodore PET, and TRS-80.  One can see the resurrection of that passion with single board computers such as the rPi, Arduino, and Beaglebone.  In fact, whenever any new technology appears on the horizon (for example, the Internet of Things) we often see the hobbyist passionately embracing the idea.  Why?  Because as a new idea, it speaks to our imagination.  Similarly, new languages and frameworks ignite passion with some programmers, again because of that imaginative, creative quality that something new and shiny holds.
Where we Fail
Unfortunately, we also fail at bringing artistry to software development.  Copy and paste is not artistic.  Inconsistent styling of code is not beautiful.  An operating system that produces a 16-byte hex code for an error message is not graceful.  The nightmare of NPM dependencies (a great example here) is not elegant, not beautiful, and certainly not a pleasure to use.  Knuth reminds us that programming should for the most part be pleasurable.  So why have we created languages like VB and JavaScript that we love to hate, and syntaxes like HTML and CSS that are complex and not fully supported across all the flavors of browsers and browser versions, and why do we put up with them?  Worse, why do we ourselves write ugly, unreadable, un-maintainable code?  Did we ever even look at programming from an aesthetic point of view?  It is unfortunate that in a world of deadlines and performance reviews we seem to have lost (if we ever had it to begin with) the ability to program artistically.

Conclusion: What Then is the Software Development Process?

Most software development processes attempt to replicate well-developed processes for working with nature and physical construction.  This is the wrong approach.  Software usually does not involve physical things about which everyone can agree on the observations and the "science."  Rather, software development involves people (along with their foibles) and concepts (about which people often disagree.)  One distinguishing case is software designed to control hardware (a thing), whether it's the cash dispenser of an ATM, a self-driving car, or a spacecraft.  Because the "thing" is then well known, the software development process has a firm foundation for construction.  However, much of software development lacks that firm foundation.  The debate over which methodology is best is inappropriate because none of the so-called methodologies put forward over the years are actual methodologies.  Discussing approaches and philosophy might be fun over a beer, but does it really result in advancing the process?
In fact, the question of defining a software development process is wrong because there simply is no right answer.  Emphasis should be placed on the skills of the developers themselves and how they transform the unknown into the known, with artistry.  If anything, the software development process needs to be approached like a craft, in which skilled people mentor the less skilled and in which masters of the craft recognize, in some tangible manner, the "leveling up" of an apprentice or journeyman.  The "process" is best described as a transformation of art into science through formalization and knowledge that can be shared with others.  Throughout the process, the work that we do must be personally satisfying.  While there is satisfaction in "it works," the true artisan finds personal satisfaction in also creating something that is aesthetically pleasing, whether it is (to name a few) in their coding style, writing a beautiful algorithm, or applying some technology or language feature elegantly to solve a complex problem.  When we critique someone else's work, it is not sufficient to count how many unit tests pass.  A true masterpiece includes the process as well as the code and the user experience, all of which should combine aspects of artistry and science.  If we looked at software development in this way, we might eventually get to a better place, where processes can actually be called methodologies and with languages and tools that we can say were truly engineered.
What Then is A Senior Developer or a Software Engineer?
Perhaps: someone who is senior, or calls themselves a software engineer, is able to apply scientific methods and formal methodologies to their work process; demonstrates skill in the domain, tools, and languages; would be considered a master craftsman (i.e., has a proven track record, the ability to teach others, etc.); and also treats development as an art, meaning that it requires creativity, imagination, the ability to think outside the box of said skills, and that those skills are applied with an aesthetic sense.