An Overview of Financial Management and the Financial Environment

In a global beauty contest for companies, the winner is . . . Apple Computer.
Or at least Apple is the most admired company in the world, according to Fortune
magazine’s annual survey. The others in the global top ten are Google, Berkshire Hathaway,
Southwest Airlines, Procter & Gamble, Coca-Cola, Amazon.com, FedEx, Microsoft, and
McDonald’s. What do these companies have that separates them from the rest of the pack?
According to a survey of executives, directors, and security analysts, these companies have very high average scores across nine attributes: (1) innovativeness, (2) quality of management, (3) long-term investment value, (4) social responsibility, (5) employee talent, (6) quality of products and services, (7) financial soundness, (8) use of corporate assets, and (9) effectiveness in doing business globally.

After weaker companies are culled, the final rankings are determined by more than 3,700 experts from a wide variety of industries.
What do these companies have in common? First, they have an incredible
focus on using technology to understand their customers, reduce costs, reduce
inventory, and speed up product delivery. Second, they continually innovate and
invest in ways to differentiate their products. Some are known for game-changing
products, such as Apple’s iPad. Others continually introduce small improvements,
such as Southwest Airlines’s streamlined boarding procedures.
In addition to their acumen with technology and customers, they are also on
the leading edge when it comes to training employees and providing a workplace
in which people can thrive.
Prior to the global economic crisis, these companies maintained reasonable
debt levels and avoided overpaying for acquisitions. This allowed them to weather
the crisis and position themselves for stronger subsequent performance than many
of their competitors.
In a nutshell, these companies reduce costs by having innovative production processes, they create value for customers by providing high-quality products and services, and they create value for employees by training and fostering an environment that allows employees to utilize all of their skills and talents. As you will see throughout this book, the resulting cash flow and superior return on capital also create value for investors.


Introduction to Cybersecurity

Chapter 1: The Need for Cybersecurity

This chapter explains what cybersecurity is and why the demand for cybersecurity professionals is growing. It explains what your online identity and data are, where they are, and why they are of interest to cyber criminals.

This chapter also discusses what organizational data is, and why it must be protected. It discusses who the cyber attackers are and what they want. Cybersecurity professionals must have the same skills as the cyber attackers, but cybersecurity professionals must work within the bounds of the local, national and international law. Cybersecurity professionals must also use their skills ethically.

Also included in this chapter is content that briefly explains cyber warfare and why nations and governments need cybersecurity professionals to help protect their citizens and infrastructure.

What is Cybersecurity?

The connected electronic information network has become an integral part of our daily lives. All types of organizations, such as medical, financial, and education institutions, use this network to operate effectively. They utilize the network by collecting, processing, storing, and sharing vast amounts of digital information. As more digital information is gathered and shared, the protection of this information is becoming even more vital to our national security and economic stability.

Cybersecurity is the ongoing effort to protect these networked systems and all of the data from unauthorized use or harm. On a personal level, you need to safeguard your identity, your data, and your computing devices. At the corporate level, it is everyone’s responsibility to protect the organization’s reputation, data, and customers. At the state level, national security, and the safety and well-being of the citizens are at stake.

Your Online and Offline Identity

As more time is spent online, your identity, both online and offline, can affect your life. Your offline identity is the person who your friends and family interact with on a daily basis at home, at school, or work. They know your personal information, such as your name, age, or where you live. Your online identity is who you are in cyberspace. Your online identity is how you present yourself to others online. This online identity should only reveal a limited amount of information about you.

You should take care when choosing a username or alias for your online identity. The username should not include any personal information. It should be something appropriate and respectful. This username should not lead strangers to think you are an easy target for cybercrimes or unwanted attention.

Your Data

Any information about you can be considered to be your data. This personal information can uniquely identify you as an individual. This data includes the pictures and messages that you exchange with your family and friends online. Other information, such as name, social security number, date and place of birth, or mother's maiden name, is known by you and used to identify you. Information such as medical, educational, financial, and employment information, can also be used to identify you online.

Medical Records

Every time you go to the doctor’s office, more information is added to your electronic health records (EHRs). The prescription from your family doctor becomes part of your EHR. Your EHR includes your physical health, mental health, and other personal information that may not be medically-related. For example, if you had counseling as a child when there were major changes in the family, this will be somewhere in your medical records. Besides your medical history and personal information, the EHR may also include information about your family.

Medical devices, such as fitness bands, use the cloud platform to enable wireless transfer, storage and display of clinical data like heart rates, blood pressures and blood sugars. These devices can generate an enormous amount of clinical data that could become part of your medical records.

Education Records

As you progress through your education, information about your grades and test scores, your attendance, courses taken, awards and degrees awarded, and any disciplinary reports may be in your education record. This record may also include contact information, health and immunization records, and special education records including individualized education programs (IEPs).

Employment and Financial Records

Your financial record may include information about your income and expenditures. Tax records could include paycheck stubs, credit card statements, your credit rating and other banking information. Your employment information can include your past employment and your performance.

Where is Your Data?

All of this information is about you. There are different laws that protect your privacy and data in your country. But do you know where your data is?

When you are at the doctor’s office, the conversation you have with the doctor is recorded in your medical chart. For billing purposes, this information may be shared with the insurance company to ensure appropriate billing and quality. Now, a part of your medical record for the visit is also at the insurance company.

Store loyalty cards may be a convenient way to save money on your purchases. However, the store is compiling a profile of your purchases and using that information for its own purposes. The profile shows that a buyer purchases a certain brand and flavor of toothpaste regularly. The store uses this information to target the buyer with special offers from its marketing partner. By using the loyalty card, the store and the marketing partner have a profile of the purchasing behavior of a customer.

When you share your pictures online with your friends, do you know who may have a copy of the pictures? Copies of the pictures are on your own devices. Your friends may have copies of those pictures downloaded onto their devices. If the pictures are shared publicly, strangers may have copies of them, too. They could download those pictures or take screenshots of those pictures. Because the pictures were posted online, they are also saved on servers located in different parts of the world. Now the pictures are no longer only found on your computing devices.

Your Computing Devices

Your computing devices do not just store your data. Now these devices have become the portal to your data and generate information about you.

Unless you have chosen to receive paper statements for all of your accounts, you use your computing devices to access the data. If you want a digital copy of the most recent credit card statement, you use your computing devices to access the website of the credit card issuer. If you want to pay your credit card bill online, you access the website of your bank to transfer the funds using your computing devices. Besides allowing you to access your information, the computing devices can also generate information about you.

With all this information about you available online, your personal data has become profitable to hackers.

They Want Your Money

If you have anything of value, the criminals want it.

Your online credentials are valuable. These credentials give the thieves access to your accounts. You may think the frequent flyer miles you have earned are not valuable to cybercriminals. Think again. After approximately 10,000 American Airlines and United accounts were hacked, cybercriminals booked free flights and upgrades using these stolen credentials. Even though the frequent flyer miles were returned to the customers by the airlines, this demonstrates the value of login credentials. A criminal could also take advantage of your relationships. They could access your online accounts and your reputation to trick you into wiring money to your friends or family. The criminal can send messages stating that your family or friends need you to wire them money so they can get home from abroad after losing their wallets.

The criminals are very imaginative when they are trying to trick you into giving them money. They do not just steal your money; they could also steal your identity and ruin your life.

They Want Your Identity

Besides stealing your money for a short-term monetary gain, the criminals want long-term profits by stealing your identity.

As medical costs rise, medical identity theft is also on the rise. The identity thieves can steal your medical insurance and use your medical benefits for themselves, and these medical procedures are now in your medical records.

The annual tax filing procedures may vary from country to country; however, cybercriminals see this time as an opportunity. For example, the people of the United States need to file their taxes by April 15 of each year. The Internal Revenue Service (IRS) does not check the tax return against the information from the employer until July. An identity thief can file a fake tax return and collect the refund. The legitimate filers will notice when their returns are rejected by the IRS. With the stolen identity, criminals can also open credit card accounts and run up debts in your name. This will damage your credit rating and make it more difficult for you to obtain loans.

Personal credentials can also lead to corporate data and government data access.

Types of Organizational Data

Traditional Data

Corporate data includes personnel information, intellectual properties, and financial data. The personnel information includes application materials, payroll, offer letters, employee agreements, and any information used in making employment decisions. Intellectual property, such as patents, trademarks and new product plans, allows a business to gain economic advantage over its competitors. This intellectual property can be considered a trade secret; losing this information can be disastrous for the future of the company. The financial data, such as income statements, balance sheets, and cash flow statements of a company gives insight into the health of the company.

Internet of Things and Big Data

With the emergence of the Internet of Things (IoT), there is a lot more data to manage and secure. IoT is a large network of physical objects, such as sensors and equipment that extend beyond the traditional computer network. All these connections, plus the fact that we have expanded storage capacity and storage services through the cloud and virtualization, lead to the exponential growth of data. This data has created a new area of interest in technology and business called “Big Data”. With the velocity, volume, and variety of data generated by the IoT and the daily operations of business, the confidentiality, integrity and availability of this data is vital to the survival of the organization.

Confidentiality, Integrity, and Availability

Confidentiality, integrity and availability, known as the CIA triad, is a guideline for information security for an organization. Confidentiality ensures the privacy of data by restricting access through authentication and encryption. Integrity assures that the information is accurate and trustworthy. Availability ensures that the information is accessible to authorized people.

Confidentiality

Another term for confidentiality would be privacy. Company policies should restrict access to the information to authorized personnel and ensure that only those authorized individuals view this data. The data may be compartmentalized according to the security or sensitivity level of the information. For example, a Java program developer should not have access to the personal information of all employees. Furthermore, employees should receive training to understand the best practices in safeguarding sensitive information to protect themselves and the company from attacks. Methods to ensure confidentiality include data encryption, usernames and passwords, two-factor authentication, and minimizing exposure of sensitive information.

Integrity

Integrity is accuracy, consistency, and trustworthiness of the data during its entire life cycle. Data must be unaltered during transit and not changed by unauthorized entities. File permissions and user access control can prevent unauthorized access. Version control can be used to prevent accidental changes by authorized users. Backups must be available to restore any corrupted data, and checksum hashing can be used to verify integrity of the data during transfer.

A checksum is used to verify the integrity of files, or strings of characters, after they have been transferred from one device to another across your local network or the Internet. Checksums are calculated with hash functions. Some of the common checksums are MD5, SHA-1, SHA-256, and SHA-512. A hash function uses a mathematical algorithm to transform the data into a fixed-length value that represents the data. The hashed value is simply there for comparison; the original data cannot be retrieved directly from it. For example, if you forgot your password, it cannot be recovered from the hashed value. The password must be reset.

After a file is downloaded, you can verify its integrity by comparing the hash value published by the source with the one you generate using any hash calculator. If the hash values match, you can be confident that the file has not been tampered with or corrupted during the transfer.
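As a minimal sketch on a Linux system (the file name and the published hash below are placeholders, not real values):

   $ sha256sum downloaded-file.iso
   $ echo "<published-sha256-hash>  downloaded-file.iso" | sha256sum --check
   # prints "downloaded-file.iso: OK" when the computed hash matches the published one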

Availability

Maintaining equipment, performing hardware repairs, keeping operating systems and software up to date, and creating backups ensure the availability of the network and data to the authorized users. Plans should be in place to recover quickly from natural or man-made disasters. Security equipment or software, such as firewalls, guard against downtime due to attacks such as denial of service (DoS). Denial of service occurs when an attacker attempts to overwhelm resources so the services are not available to the users.

[ Cisco Networking Academy ]


Installing Docker Engine – Community for Ubuntu

(A) Objective:

On the host machine running Ubuntu Linux Desktop LTS 18.04.4 as my native OS, use Docker Engine - Community for Ubuntu
so that I can run any kind of app, or other virtual Linux distros (e.g. CentOS Linux 8.1), by containerizing those apps
and/or Linux distros.

(B) Prerequisites:

- Stable internet connection                                  [OK]
- Ubuntu Linux Desktop LTS 18.04.4 properly installed         [OK]
- Docker Engine - Community for Ubuntu properly installed     [KO]

(C) Get Docker Engine – Community for Ubuntu:

- OS requirements => Ubuntu Linux Desktop 18.04.4 LTS [Bionic Beaver]
- Docker Engine - Community is supported on x86_64

(D) Uninstall old versions

Older versions of Docker were called docker, docker.io, or docker-engine.
If these are installed, uninstall them:

$ sudo apt-get remove docker docker-engine docker.io containerd runc

It’s OK if apt-get reports that none of these packages are installed.

The contents of /var/lib/docker/, including images, containers, volumes, and networks, are preserved. 
The Docker Engine - Community package is now called docker-ce.

Supported storage drivers:

Docker Engine - Community on Ubuntu supports overlay2, aufs and btrfs storage drivers.

In Docker Engine - Enterprise, btrfs is only supported on SLES. See the documentation on btrfs for more details.

(E) Install Docker Engine – Community

You can install Docker Engine - Community in different ways, depending on your needs:

Most users set up Docker’s repositories and install from them, for ease of installation and upgrade tasks.
This is the recommended approach.    

Install using the repository

Before you install Docker Engine - Community for the first time on a new host machine,
you need to set up the Docker repository. Afterward, you can install and update Docker from the repository.

1. Update the apt package index:

   $ sudo apt-get update

   output:


fajar@fajar-Lenovo-ideapad-320-14AST:~$ sudo apt-get update
Ign:1 http://dl.google.com/linux/chrome/deb stable InRelease                   
Hit:2 http://dl.google.com/linux/chrome/deb stable Release                     
Get:4 http://security.ubuntu.com/ubuntu bionic-security InRelease [88,7 kB]    
Hit:5 http://id.archive.ubuntu.com/ubuntu bionic InRelease                     
Get:6 http://id.archive.ubuntu.com/ubuntu bionic-updates InRelease [88,7 kB]   
Hit:7 https://download.docker.com/linux/ubuntu bionic InRelease                
Get:8 http://id.archive.ubuntu.com/ubuntu bionic-backports InRelease [74,6 kB] 
Fetched 252 kB in 8s (32,4 kB/s)                                               
Reading package lists... Done

2. Install packages to allow apt to use a repository over HTTPS:

   $ sudo apt-get install \
       apt-transport-https \
       ca-certificates \
       curl \
       gnupg-agent \
       software-properties-common

    or

   $ sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

 3. Add Docker's official GPG key:

     $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

    Verify that you now have the key with the fingerprint:

    9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88

     by searching for the last 8 characters of the fingerprint.

    $ sudo apt-key fingerprint 0EBFCD88


pub   rsa4096 2017-02-22 [SCEA]
    9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid           [ unknown] Docker Release (CE deb) <docker@docker.com>
sub   rsa4096 2017-02-22 [S]
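 4. Set up the stable repository. This step is part of Docker's install guide for Ubuntu but appears to be missing
    above; the command below is the documented one for x86_64/amd64:

    $ sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) \
       stable"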

(F) INSTALL DOCKER ENGINE – COMMUNITY

1. Update the apt package index.

   $ sudo apt-get update

2. Install the latest version of Docker Engine - Community and containerd, or go to the next step to install a specific
   version:

   $ sudo apt-get install docker-ce docker-ce-cli containerd.io

3. Verify that Docker Engine - Community is installed correctly by running the hello-world image.

   $ sudo docker run hello-world


   [sudo] password for fajar: 

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:

  1. The Docker client contacted the Docker daemon.
  2. The Docker daemon pulled the “hello-world” image from the Docker Hub.
    (amd64)
  3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
  4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/

This command downloads a test image and runs it in a container. When the container runs, it prints an informational message and exits.

Docker Engine – Community is installed and running. The docker group is created but no users are added to it. You need to use sudo to run Docker commands. Continue to Linux postinstall to allow non-privileged users to run Docker commands and for other optional configuration steps.

Post-installation steps for Linux

This section contains optional procedures for configuring Linux hosts to work better with Docker.

Manage Docker as a non-root user

The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root,
and other users can only access it using sudo. The Docker daemon always runs as the root user.

If you don’t want to preface the docker command with sudo, then create a Unix group called docker and add users to it.
When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.

Steps to create the docker group and add your user:

1. Create the docker group

       $ sudo groupadd docker

2. Add your user to the docker group

       $ sudo usermod -aG docker $USER

3. Log out and log back in so that your group membership is re-evaluated

       If testing on a virtual machine, it may be necessary to restart the virtual machine for changes to take effect.

       On a desktop Linux environment such as X Windows, log out of your session completely and then log back in.

       On Linux, you can also run the following command to activate the changes to groups:

       $ newgrp docker

4. Verify that you can run docker commands without sudo

   $ docker run hello-world

$ docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:

  1. The Docker client contacted the Docker daemon.
  2. The Docker daemon pulled the “hello-world” image from the Docker Hub.
    (amd64)
  3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
  4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/
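To tie this back to the objective above (running another Linux distro such as CentOS inside a container), a quick
sketch; the image tag is illustrative and the tags actually available on Docker Hub may differ:

   $ docker pull centos:8
   $ docker run -it centos:8 bash    # opens an interactive bash shell inside a CentOS 8 container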


How to Create a Root User and How to Log In as the Root User on a Linux Operating System

At the $_ prompt

[ your active cursor blinking in front of the $ (dollar sign) ]

Type the following:

$ sudo passwd root

[sudo] password for yourname:

(enter your regular user password. If the input is correct, you will then be prompted to enter your new root password)

Enter new UNIX password:

Retype new UNIX password:

passwd: password updated successfully

$ sudo su

[sudo] password for yourname:

root@fajar-:/home/fajar#


What Is UEFI

Definition

UEFI is the abbreviation of Unified Extensible Firmware Interface, a firmware interface for computers that works as a “middleman” connecting a computer’s firmware to its operating system. It is used to initialize the hardware components and start the operating system stored on the hard disk drive when the computer starts up.

UEFI possesses many new features and advantages that cannot be achieved through the traditional BIOS, and it aims to replace the BIOS completely in the future.

UEFI stores all the information about initialization and startup in a .efi file, a file stored on a special partition called EFI System Partition (ESP). The ESP partition will also contain the boot loader programs for the operating system installed on the computer.

Because of this partition, UEFI can boot the operating system directly and skip the BIOS self-test process, which is an important reason why UEFI boots faster.

Note: Some computer users use UEFI boot but still refer to it as the “BIOS”, which may confuse some people. Even if your PC uses the term “BIOS”, most modern PCs you buy today use UEFI firmware instead of a BIOS. To distinguish UEFI from BIOS, some also call UEFI firmware “UEFI BIOS”, while the original BIOS is called “Legacy BIOS” or “traditional BIOS”.
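On a Linux system, one quick way to tell whether the machine booted through UEFI or a legacy BIOS is to check for the EFI variables directory exposed by the kernel; a minimal sketch:

   $ [ -d /sys/firmware/efi ] && echo "Booted in UEFI mode" || echo "Booted in legacy BIOS mode"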


From Monolith to Microservices

Introduction

Most new companies today run their business processes in the cloud. Newer startups and enterprises which realized early enough the direction technology was headed developed their applications for the cloud.

Not all companies were so fortunate. Some built their success decades ago on top of legacy technologies – monolithic applications with all components tightly coupled and almost impossible to separate, a nightmare to manage and deployed on super expensive hardware.

If you work for an organization which refers to its main business application as “the black box”, where nobody knows what happens inside and most logic was never documented, leaving everyone clueless as to what and how things happen from the moment a request enters the application until a response comes out, and you are tasked to convert this business application into a cloud-ready set of applications, then you may be in for a very long and bumpy ride.

By the end of this article, you should be able to:

  • Explain what a monolith is.
  • Discuss the monolith’s challenges in the cloud.
  • Explain the concept of microservices.
  • Discuss microservices advantages in the cloud.
  • Describe the transformation path from a monolith to microservices.

The Legacy Monolith

Although most enterprises believe that the cloud will be the new home for legacy apps, not all legacy apps are a fit for the cloud, at least not yet.

Moving an application to the cloud should be as easy as walking on the beach, collecting pebbles in a bucket, and carrying them easily wherever needed. A 1000-ton boulder, on the other hand, is not easy to carry at all. This boulder represents the monolith application – sedimented layers of features and redundant logic translated into thousands of lines of code, written in a single, not so modern programming language, based on outdated software architecture patterns and principles.

In time, the new features and improvements added to code complexity, making development more challenging – loading, compiling, and building times increase with every new update. However, there is some ease in administration as the application is running on a single server, ideally a Virtual Machine or a Mainframe.

A monolith has a rather expensive taste in hardware. Being a large, single piece of software which continuously grows, it has to run on a single system which has to satisfy its compute, memory, storage, and networking requirements. The hardware of such capacity is both complex and pricey.

Since the entire monolith application runs as a single process, the scaling of individual features of the monolith is almost impossible. It internally supports a hardcoded number of connections and operations. However, scaling the entire application means manually deploying a new instance of the monolith on another server, typically behind a load balancing appliance – another pricey solution.

During upgrades, patches or migrations of the monolith application – downtimes occur and maintenance windows have to be planned as disruptions in service are expected to impact clients. While there are solutions to minimize downtimes to customers by setting up monolith applications in a highly available active/passive configuration, it may still be challenging for system engineers to keep all systems at the same patch level.

The Modern Microservice

Pebbles, as opposed to the 1000-ton boulder, are much easier to handle. They are carved out of the monolith, separated from one another, becoming distributed components each described by a set of specific characteristics. Once weighed all together, the pebbles make up the weight of the entire boulder. These pebbles represent loosely coupled microservices, each performing a specific business function. All the functions grouped together form the overall functionality of the original monolithic application. Pebbles are easy to select and group together based on color, size, shape, and require minimal effort to relocate when needed. Try relocating the 1000-ton boulder, effortlessly.

Microservices can be deployed individually on separate servers provisioned with fewer resources – only what is required by each service and the host system itself.

Microservices-based architecture is aligned with Event-driven Architecture and Service-Oriented Architecture (SOA) principles, where complex applications are composed of small independent processes which communicate with each other through APIs over a network. APIs allow access by other internal services of the same application or external, third-party services and applications.

Each microservice is developed and written in a modern programming language, selected to be the best suitable for the type of service and its business function. This offers a great deal of flexibility when matching microservices with specific hardware when required, allowing deployments on inexpensive commodity hardware.

Although the distributed nature of microservices adds complexity to the architecture, one of the greatest benefits of microservices is scalability. With the overall application becoming modular, each microservice can be scaled individually, either manually or automated through demand-based autoscaling.

Seamless upgrades and patching processes are other benefits of microservices architecture. There is virtually no downtime and no service disruption to clients because upgrades are rolled out seamlessly – one service at a time, rather than having to re-compile, re-build and re-start an entire monolithic application. As a result, businesses are able to develop and roll-out new features and updates a lot faster, in an agile approach, having separate teams focusing on separate features, thus being more productive and cost-effective.

Refactoring

Newer, more modern enterprises possess the knowledge and technology to build cloud-native applications that power their business.

Unfortunately, that is not the case for established enterprises running on legacy monolithic applications. Some have tried to run monoliths as microservices, and as one would expect, it did not work very well. The lessons learned were that a monolithic size multi-process application cannot run as a microservice and that other options had to be explored. The next natural step in the path of the monolith to microservices transition was refactoring. However, migrating a decades-old application to the cloud through refactoring poses serious challenges and the enterprise faces the refactoring approach dilemma: a “Big-bang” approach or an incremental refactoring.

A so-called “Big-bang” approach focuses all efforts on the refactoring of the monolith, postponing the development and implementation of any new features – essentially delaying progress and possibly, in the process, even breaking the core of the business, the monolith.

An incremental refactoring approach guarantees that new features are developed and implemented as modern microservices which are able to communicate with the monolith through APIs, without appending to the monolith’s code. In the meantime, features are refactored out of the monolith, which slowly fades away while all, or most of, its functionality is modernized into microservices. This incremental approach offers a gradual transition from a legacy monolith to a modern microservices architecture and allows for phased migration of application features into the cloud.

Once an enterprise has chosen the refactoring path, there are other considerations in the process. Which business components to separate from the monolith to become distributed microservices, how to decouple the databases from the application to separate data complexity from application logic, and how to test the new microservices and their dependencies, are just a few of the decisions an enterprise is faced with during refactoring.

The refactoring phase slowly transforms the monolith into a cloud-native application which takes full advantage of cloud features, by coding in new programming languages and applying modern architectural patterns. Through refactoring, a legacy monolith application receives a second chance at life – to live on as a modular system adapted to fully integrate with today’s fast-paced cloud automation tools and services.

Challenges

The refactoring path from a monolith to microservices is not smooth and without challenges. Not all monoliths are perfect candidates for refactoring, while some may not even “survive” such a modernization phase. When deciding whether a monolith is a possible candidate for refactoring, there are many possible issues to consider.

When considering a legacy Mainframe based system, written in older programming languages – Cobol or Assembler, it may be more economical to just re-build it from the ground up as a cloud-native application. A poorly designed legacy application should be re-designed and re-built from scratch following modern architectural patterns for microservices and even containers. Applications tightly coupled with data stores are also poor candidates for refactoring.

Once the monolith survived the refactoring phase, the next challenge is to design mechanisms or find suitable tools to keep alive all the decoupled modules to ensure application resiliency as a whole. 

Choosing runtimes may be another challenge. If deploying many modules on a single physical or virtual server, chances are that different libraries and runtime environments may conflict with one another, causing errors and failures. This forces deployments of single modules per server in order to separate their dependencies – not an economical way of resource management, and no real segregation of libraries and runtimes, as each server also has an underlying Operating System running with its libraries, thus consuming server resources – at times the OS consuming more resources than the application module itself.

Ultimately application containers came along, providing encapsulated lightweight runtime environments for application modules. Containers promised consistent software environments for developers, testers, all the way from Development to Production. Wide support of containers ensured application portability from physical bare-metal to Virtual Machines, but this time with multiple applications deployed on the very same server, each running in their own execution environments isolated from one another, thus avoiding conflicts, errors, and failures. Other features of containerized application environments are higher server utilization, individual module scalability, flexibility, interoperability and easy integration with automation tools.

Success Stories

Although a challenging process, moving from monoliths to microservices is a rewarding journey especially once a business starts to see growth and success delivered by a refactored application system. Below we are listing only a handful of the success stories of companies which rose to the challenge to modernize their monolith business applications. A detailed list of success stories is available at the Kubernetes website: Kubernetes User Case Studies.

  • AppDirect – an end-to-end commerce platform provider, started from a complex monolith application and through refactoring was able to retain limited functionality monoliths receiving very few commits, but all new features implemented as containerized microservices.
  • box – a cloud storage solutions provider, started from a complex monolith architecture and through refactoring was able to decompose it into microservices.
  • Crowdfire – a content management solutions provider, successfully broke down their initial monolith into microservices.
  • GolfNow – a technology and services provider, decided to break their monoliths apart into containerized microservices.
  • Pinterest – a social media services provider, started the refactoring process by first migrating their monolith API.


What Are Containers?

Containers are application-centric methods to deliver high-performing, scalable applications on any infrastructure of your choice. Containers are best suited to deliver microservices by providing portable, isolated virtual environments for applications to run without interference from other running applications.

Microservices are lightweight applications written in various modern programming languages, with specific dependencies, libraries and environmental requirements. To ensure that an application has everything it needs to run successfully it is packaged together with its dependencies.

Containers encapsulate microservices and their dependencies but do not run them directly. Containers run container images.

A container image bundles the application along with its runtime and dependencies, and a container is deployed from the container image offering an isolated executable environment for the application. Containers can be deployed from a specific image on many platforms, such as workstations, Virtual Machines, public cloud, etc.
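As a small illustration of deploying a container from a container image with Docker (the image, container name, and port mapping here are just example choices):

   $ docker pull nginx:latest                              # fetch the container image
   $ docker run -d --name web -p 8080:80 nginx:latest      # run an isolated container from that image
   $ curl http://localhost:8080                            # the containerized web server answers on the mapped port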

What Is Container Orchestration?

In Development (Dev) environments, running containers on a single host for development and testing of applications may be an option. However, when migrating to Quality Assurance (QA) and Production (Prod) environments, that is no longer a viable option because the applications and services need to meet specific requirements:

  • Fault-tolerance
  • On-demand scalability
  • Optimal resource usage
  • Auto-discovery to automatically discover and communicate with each other
  • Accessibility from the outside world
  • Seamless updates/rollbacks without any downtime.

Container orchestrators are tools which group systems together to form clusters where containers’ deployment and management is automated at scale while meeting the requirements mentioned above.

Container Orchestrators

With enterprises containerizing their applications and moving them to the cloud, there is a growing demand for container orchestration solutions. While there are many solutions available, some are mere re-distributions of well-established container orchestration tools, enriched with features and, sometimes, with certain limitations in flexibility.

Although not exhaustive, a few of the container orchestration tools and services available today include Amazon ECS, Azure Container Instances, Docker Swarm, Kubernetes, Marathon, and Nomad.

Why Use Container Orchestrators?

Although we can manually maintain a couple of containers or write scripts for dozens of containers, orchestrators make things much easier for operators especially when it comes to managing hundreds and thousands of containers running on a global infrastructure.

Most container orchestrators can:

  • Group hosts together while creating a cluster
  • Schedule containers to run on hosts in the cluster based on resources availability
  • Enable containers in a cluster to communicate with each other regardless of the host they are deployed to in the cluster
  • Bind containers and storage resources
  • Group sets of similar containers and bind them to load-balancing constructs to simplify access to containerized applications by creating a level of abstraction between the containers and the user
  • Manage and optimize resource usage
  • Allow for implementation of policies to secure access to applications running inside containers.

With all these configurable yet flexible features, container orchestrators are an obvious choice when it comes to managing containerised applications at scale. In this course, we will explore Kubernetes, one of the most in-demand container orchestration tools available today.

Where to Deploy Container Orchestrators?

Most container orchestrators can be deployed on the infrastructure of our choice – on bare metal, Virtual Machines, on-premise, or the public cloud. Kubernetes, for example, can be deployed on a workstation, with or without a local hypervisor such as Oracle VirtualBox, inside a company’s data center, in the cloud on AWS Elastic Compute Cloud (EC2) instances, Google Compute Engine (GCE) VMs, DigitalOcean Droplets, OpenStack, etc.
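For the workstation case, one common option (not mentioned above, so treat the tooling choice as an assumption) is Minikube, which provisions a local single-node cluster, with or without a hypervisor such as VirtualBox:

   $ minikube start        # provision a local single-node Kubernetes cluster
   $ kubectl get nodes     # verify that the node reports a Ready status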

There are turnkey solutions which allow Kubernetes clusters to be installed, with only a few commands, on top of cloud Infrastructures-as-a-Service, such as GCE, AWS EC2, Docker Enterprise, IBM Cloud, Rancher, VMware, Pivotal, and multi-cloud solutions through IBM Cloud Private and StackPointCloud.

Last but not least, there is the managed container orchestration as-a-Service, more specifically the managed Kubernetes as-a-Service solution, offered and hosted by the major cloud providers, such as Google Kubernetes Engine (GKE), Amazon Elastic Container Service for Kubernetes (Amazon EKS), Azure Kubernetes Service (AKS), IBM Cloud Kubernetes Service, DigitalOcean Kubernetes, Oracle Container Engine for Kubernetes, etc. These shall be explored in one of the later chapters.

According to the Kubernetes website,

“Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.”

Kubernetes comes from the Greek word κυβερνήτης, which means helmsman or ship pilot. With this analogy in mind, we can think of Kubernetes as the pilot on a ship of containers.

Kubernetes is also referred to as k8s, as there are 8 characters between k and s.

Kubernetes is highly inspired by the Google Borg system, a container orchestrator that has run Google’s global operations for more than a decade. It is an open source project written in the Go language and licensed under the Apache License, Version 2.0.

Kubernetes was started by Google and, with its v1.0 release in July 2015, Google donated it to the Cloud Native Computing Foundation (CNCF). We will talk more about CNCF later in this chapter.

New Kubernetes versions are released in 3 months cycles. The current stable version is 1.14 (as of May 2019).

Kubernetes Features I

Kubernetes offers a very rich set of features for container orchestration. Some of its fully supported features are:

  • Automatic bin packing
    Kubernetes automatically schedules containers based on resource needs and constraints, to maximize utilization without sacrificing availability.
  • Self-healing
    Kubernetes automatically replaces and reschedules containers from failed nodes. It kills and restarts containers unresponsive to health checks, based on existing rules/policy. It also prevents traffic from being routed to unresponsive containers.
  • Horizontal scaling
    With Kubernetes, applications are scaled manually or automatically based on CPU utilization or custom metrics (see the sketch after this list).
  • Service discovery and Load balancing
    Containers receive their own IP addresses from Kubernetes, while it assigns a single Domain Name System (DNS) name to a set of containers to aid in load-balancing requests across the containers of the set.
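A hedged illustration of manual and automatic horizontal scaling with kubectl (the deployment name and thresholds are made up):

   $ kubectl scale deployment webapp --replicas=5                            # manual scaling
   $ kubectl autoscale deployment webapp --cpu-percent=50 --min=2 --max=10   # CPU-based autoscaling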

Kubernetes Features II

Some other fully supported Kubernetes features are:

  • Automated rollouts and rollbacks
    Kubernetes seamlessly rolls out and rolls back application updates and configuration changes, constantly monitoring the application’s health to prevent any downtime.
  • Secret and configuration management
    Kubernetes manages secrets and configuration details for an application separately from the container image, in order to avoid a re-build of the respective image. Secrets consist of confidential information passed to the application without revealing the sensitive content to the stack configuration, like on GitHub (see the example after this list).
  • Storage orchestration
    Kubernetes automatically mounts software-defined storage (SDS) solutions to containers from local storage, external cloud providers, or network storage systems.
  • Batch execution
    Kubernetes supports batch execution, long-running jobs, and replaces failed containers.
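A hedged example of the secret and configuration management feature (the secret name and values are made up):

   $ kubectl create secret generic db-credentials \
       --from-literal=username=admin --from-literal=password='S3cr3t!'
   $ kubectl get secret db-credentials    # stored by Kubernetes, separate from any container image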

There are many other features besides the ones we just mentioned, and they are currently in alpha/beta phase. They will add great value to any Kubernetes deployment once they become stable features. For example, support for role-based access control (RBAC) is stable as of the Kubernetes 1.8 release.

Why Use Kubernetes?

In addition to its fully-supported features, Kubernetes is also portable and extensible. It can be deployed in many environments such as local or remote Virtual Machines, bare metal, or in public/private/hybrid/multi-cloud setups. It supports and it is supported by many 3rd party open source tools which enhance Kubernetes’ capabilities and provide a feature-rich experience to its users.

Kubernetes’ architecture is modular and pluggable. Not only does it orchestrate modular, decoupled microservices-type applications, but its architecture also follows decoupled microservices patterns. Kubernetes’ functionality can be extended by writing custom resources, operators, custom APIs, scheduling rules or plugins.

For a successful open source project, the community is as important as having great code. Kubernetes is supported by a thriving community across the world. It has more than 2,000 contributors, who, over time, have pushed over 77,000 commits. There are meet-up groups in different cities and countries which meet regularly to discuss Kubernetes and its ecosystem. There are Special Interest Groups (SIGs), which focus on special topics, such as scaling, bare metal, networking, etc. We will talk more about them in our last chapter, Kubernetes Communities.

Kubernetes Users

With just a few years since its debut, many enterprises of various sizes run their workloads using Kubernetes. It is a solution for workload management in banking, education, finance and investments, gaming, information technology, media and streaming, online retail, ridesharing, telecommunications, and many other industries. There are numerous user case studies and success stories on the Kubernetes website.

Kubernetes Architecture

At a very high level, Kubernetes has the following main components:

  • One or more master nodes
  • One or more worker nodes
  • Distributed key-value store, such as etcd.


What is cache memory?

Cache memory, also called CPU memory, is high-speed static random access memory (SRAM) that a computer microprocessor can access more quickly than it can access regular random access memory (RAM). This memory is typically integrated directly into the CPU chip or placed on a separate chip that has a separate bus interconnect with the CPU. The purpose of cache memory is to store program instructions and data that are used repeatedly in the operation of programs or information that the CPU is likely to need next. The computer processor can access this information quickly from the cache rather than having to get it from the computer’s main memory. Fast access to these instructions increases the overall speed of the program.

As the microprocessor processes data, it looks first in the cache memory. If it finds the instructions or data it’s looking for there from a previous reading of data, it does not have to perform a more time-consuming reading of data from larger main memory or other data storage devices. Cache memory is responsible for speeding up computer operations and processing.

Once they have been opened and operated for a time, most programs use few of a computer’s resources. That’s because frequently re-referenced instructions tend to be cached. This is why system performance measurements for computers with slower processors but larger caches can be faster than those for computers with faster processors but less cache space.

Multi-tier or multilevel caching has become popular in server and desktop architectures, with different levels providing greater efficiency through managed tiering. Simply put, the less frequently certain data or instructions are accessed, the lower down the cache level the data or instructions are written.

Implementation and history

Mainframes used an early version of cache memory, but the technology as it is known today began to be developed with the advent of microcomputers. With early PCs, processor performance increased much faster than memory performance, and memory became a bottleneck, slowing systems.

In the 1980s, the idea took hold that a small amount of more expensive, faster SRAM could be used to improve the performance of the less expensive, slower main memory. Initially, the memory cache was separate from the system processor and not always included in the chipset. Early PCs typically had from 16 KB to 128 KB of cache memory.

With 486 processors, Intel added 8 KB of memory to the CPU as Level 1 (L1) memory. As much as 256 KB of external Level 2 (L2) cache memory was used in these systems. Pentium processors saw the external cache memory double again to 512 KB on the high end. They also split the internal cache memory into two caches: one for instructions and the other for data.

Processors based on Intel’s P6 microarchitecture, introduced in 1995, were the first to incorporate L2 cache memory into the CPU and enable all of a system’s cache memory to run at the same clock speed as the processor. Prior to the P6, L2 memory external to the CPU was accessed at a much slower clock speed than the rate at which the processor ran, and slowed system performance considerably.

Early memory cache controllers used a write-through cache architecture, where data written into cache was also immediately updated in RAM. This approach minimized data loss, but also slowed operations. With later 486-based PCs, the write-back cache architecture was developed, where RAM isn’t updated immediately. Instead, data is stored in cache and RAM is updated only at specific intervals or under certain circumstances where data is missing or old.

Cache memory mapping

Caching configurations continue to evolve, but cache memory traditionally works under three different configurations:

  • Direct mapped cache has each block mapped to exactly one cache memory location (see the arithmetic sketch after this list). Conceptually, direct mapped cache is like rows in a table with three columns: the data block or cache line that contains the actual data fetched and stored, a tag with all or part of the address of the data that was fetched, and a flag bit that shows the presence in the row entry of a valid bit of data.
  • Fully associative cache mapping is similar to direct mapping in structure but allows a block to be mapped to any cache location rather than to a prespecified cache memory location as is the case with direct mapping.
  • Set associative cache mapping can be viewed as a compromise between direct mapping and fully associative mapping in which each block is mapped to a subset of cache locations. It is sometimes called N-way set associative mapping, which provides for a location in main memory to be cached to any of “N” locations in the L1 cache.
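As a tiny arithmetic sketch of direct mapping (the address, block size, and line count are made-up values), the target line index is (address / block_size) mod number_of_lines:

   $ echo $(( (0x12345 / 64) % 512 ))    # byte address 0x12345, 64-byte blocks, 512 cache lines
   141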

Format of the cache hierarchy

Cache memory is fast and expensive. Traditionally, it is categorized as “levels” that describe its closeness and accessibility to the microprocessor.

cache memory diagram

L1 cache, or primary cache, is extremely fast but relatively small, and is usually embedded in the processor chip as CPU cache.

L2 cache, or secondary cache, is often more capacious than L1. L2 cache may be embedded on the CPU, or it can be on a separate chip or coprocessor and have a high-speed alternative system bus connecting the cache and CPU. That way it doesn’t get slowed by traffic on the main system bus.

Level 3 (L3) cache is specialized memory developed to improve the performance of L1 and L2. L1 or L2 can be significantly faster than L3, though L3 is usually double the speed of RAM. With multicore processors, each core can have dedicated L1 and L2 cache, but they can share an L3 cache. If an L3 cache references an instruction, it is usually elevated to a higher level of cache.

In the past, L1, L2 and L3 caches have been created using combined processor and motherboard components. Recently, the trend has been toward consolidating all three levels of memory caching on the CPU itself. That’s why the primary means for increasing cache size has begun to shift from the acquisition of a specific motherboard with different chipsets and bus architectures to buying a CPU with the right amount of integrated L1, L2 and L3 cache.
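On Linux, the sizes of these integrated cache levels can be inspected from the command line (exact values depend on the CPU):

   $ lscpu | grep -i cache    # typically lists the L1d, L1i, L2 and L3 cache sizes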

Contrary to popular belief, implementing flash or more dynamic RAM (DRAM) on a system won’t increase cache memory. This can be confusing since the terms memory caching (hard disk buffering) and cache memory are often used interchangeably. Memory caching, using DRAM or flash to buffer disk reads, is meant to improve storage I/O by caching data that is frequently referenced in a buffer ahead of slower magnetic disk or tape. Cache memory, on the other hand, provides read buffering for the CPU.

Specialization and functionality

In addition to instruction and data caches, other caches are designed to provide specialized system functions. According to some definitions, the L3 cache’s shared design makes it a specialized cache. Other definitions keep instruction caching and data caching separate, and refer to each as a specialized cache.

Translation lookaside buffers (TLBs) are also specialized memory caches whose function is to record virtual address to physical address translations.
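Functionally, a TLB behaves like a small dictionary keyed by virtual page number. The sketch below is a toy Python illustration only; the 4 KB page size and the tiny page table are assumptions, and a real TLB is a hardware structure with limited capacity and its own replacement logic.

    # Toy TLB: caches virtual-page -> physical-frame translations.
    PAGE_SIZE = 4096                        # assumed 4 KB pages

    page_table = {0x1A: 0x7F, 0x1B: 0x03}   # hypothetical page table (page -> frame)
    tlb = {}                                # the translation lookaside buffer

    def translate(virtual_address):
        page = virtual_address // PAGE_SIZE
        offset = virtual_address % PAGE_SIZE
        if page in tlb:                     # TLB hit: no page-table walk needed
            frame = tlb[page]
        else:                               # TLB miss: walk the page table, then record it
            frame = page_table[page]
            tlb[page] = frame
        return frame * PAGE_SIZE + offset

    print(hex(translate(0x1A123)))          # misses the TLB
    print(hex(translate(0x1A456)))          # same page, so this one hits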

Still other caches are not, technically speaking, memory caches at all. Disk caches, for instance, can use RAM or flash memory to provide data caching similar to what memory caches do with CPU instructions. If data is frequently accessed from disk, it is cached into DRAM or flash-based silicon storage technology for faster access time and response.
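A disk read cache of this kind is typically managed with a replacement policy such as least recently used (LRU). The sketch below is a hypothetical Python illustration built on OrderedDict, not the buffer cache of any particular operating system.

    from collections import OrderedDict

    # Toy LRU buffer that keeps recently read disk blocks in fast memory.
    class DiskReadCache:
        def __init__(self, capacity, read_from_disk):
            self.capacity = capacity
            self.read_from_disk = read_from_disk    # the slow path, e.g. a real disk read
            self.buffer = OrderedDict()

        def read(self, block_id):
            if block_id in self.buffer:
                self.buffer.move_to_end(block_id)   # mark as most recently used
                return self.buffer[block_id]
            data = self.read_from_disk(block_id)    # miss: fetch from the slow device
            self.buffer[block_id] = data
            if len(self.buffer) > self.capacity:
                self.buffer.popitem(last=False)     # evict the least recently used block
            return data

    cache = DiskReadCache(capacity=2, read_from_disk=lambda b: f"data for block {b}")
    for block in (1, 2, 1, 3):                      # block 2 is evicted on the last read
        cache.read(block)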


Corporate, Operations and Human Resources Management – an Introduction


Module 1 : Corporate Management

This module introduces learners to the world of corporate management. The lessons cover the characteristics of large businesses, management functions within a business, and the corporate business environment. The module also covers large business structures, management styles, and approaches to managing change in a large, dynamic organisation. The course is a useful introduction for those who wish to learn and understand more about how large businesses and organisations operate.

  • Corporate Management – Learning Outcomes
  • Managing large scale organisations
  • Evaluating organisational performance
  • Management structures and objectives
  • Management roles
  • Policy development
  • Management styles
  • Management skills and competencies
  • Change management – triggers for change
  • Change management – effects of change
  • Change management – implementation and resistance
  • Corporate management – lesson summary

 

Characteristics of large scale organisations

So what is a large-scale organisation?

  • How many employees does the enterprise employ?
  • (More than 100 employees)
  • What is the annual turnover/revenue of the enterprise?
  • (Turnover/revenue is in the hundreds of thousands or millions of dollars)
  • What is the value of the enterprise’s assets?
  • (The enterprise’s assets are of similar value to its turnover)
  • How many locations does the enterprise operate from?
  • (The enterprise operates from a number of intrastate, interstate and overseas locations)
  • How many owners are there of the enterprise?
  • (There are many owners, or the nation owns it as a government enterprise)
  • What is the relationship between the owners and management of the enterprise?
  • (There is a clear distinction and separation between owners and managers)

 

Distinguishing large-scale organisations  

There are many ways that large-scale organisations can be classified and a variety of forms which they may take. The main way of distinguishing between large-scale organisations is by the ownership of the entity and its principal form of operation.

Hence, we are able to distinguish between government (public sector) and non-government (private sector) organisations or, put another way, between publicly owned and privately owned organisations. By ‘publicly owned’ we mean that the community as a whole owns the organisation and that it is operated on their behalf by the government. Public sector or government organisations may in turn be categorised into three distinct forms: general government entities, providing non-market goods and services (e.g. roads, hospitals and the like); public trading enterprises, providing market goods and services which meet their community service obligation; and public financial enterprises, providing financial services, e.g. government banks and insurance offices.

Next, we can distinguish between those large-scale organisations which have the ‘profit motive’ as their primary or core objective and those which are nonprofit oriented. We can also distinguish between large-scale organisations according to the industry to which they belong (primary, secondary or tertiary), or according to whether their core function is manufacturing or service provision.

 

We could also distinguish between organisations according to their legal status and the extent of their legal liability (e.g. sole trader, partnership, company, statutory authority, government department, and those large-scale organisations which have limited and those which have unlimited liability), and their size in terms of the number of employees, production levels and turnover.

 

 

POLC CCM

There is a wide range of essential functions that must be performed by managers in large-scale organisations. These functions may be categorised into two broad types – generic functions and specific functions.

The generic management functions which all managers perform to some extent include the ‘POLC CCM’ functions:

P –  planning

O –  organizing

L –  leading

C – controlling

 

C –  communicating

C –  creating

M – motivating

 

Explanations:

Planning: managers must perform the task of planning, at their designated level (the strategic, tactical or operational level), everything that the organisation must do to achieve its objectives, i.e. the long-term, mid-term and short-term plans.

Organising: managers must ensure that all the necessary resources, i.e. the natural resources, the human resources, the capital resources, and the entrepreneurial or ‘street smart’ resources, are available and able to be used to perform the required tasks or for the required purposes, so that the service can be provided or the product manufactured.

  • Human Resources
  • Natural Resources
  • Entrepreneurial Resources
  • Capital Resources

Leading: managers must lead the way for employees, customers and competitors; they must be at the forefront of trends and fashions and lead by example in the workplace through their technical skills and competencies.

 

Controlling: managers must perform a supervisory and control function to ensure that work is performed to the optimal level and that the quality of service provision or product manufacture is at world’s best practice level.

 

Communicating: managers must keep everyone in the organisation, as well as members of the wider community, informed of what is occurring within the organisation.

 

Creating: managers must be able to create innovative ways to perform tasks and to market the organisation’s products or services in order to enhance the organisation’s effectiveness and efficiency.

 

Motivating: managers must be able to motivate staff, to retain them in the first instance and then to ensure that their performance is optimised both for their own benefit and also for the benefit of the organisation.

 

Specific management functions

The specific management functions a manager will perform are determined by the structure of the organisation and by the area of expertise that the manager specializes in within the organisation. Examples of these functions include:

 

  • Human Resources – managers are responsible for motivating employees to achieve organisational objectives. Their responsibilities include recruitment, selection, appointments, induction, training and motivation.
  • Marketing and Public Relations – managers must ensure that the right products and services are produced in the right style, at the right time for the right consumers, and to satisfy all consumer complaints if the organisation is to be successful.
  • Banking and Finance – managers must ensure that the organisation has the necessary financial resources to achieve its objectives, and they must then control the organisation’s finances.
  • General Administration – managers must ensure that all necessary paperwork, data entry and analyses are completed in the most efficient manner.
  • Distribution – managers must ensure that products are delivered to customers on time and that delivery charges are kept to a minimum.
  • Operations – managers must meet customer demand and organizational objectives. They must be able to plan and execute all phases of the manufacturing process or service provision.

 

Contributions of large-scale organisations to the economy

Large-scale organisations make a significant contribution to the economy as a whole, and it is for this reason that governments take a special interest in the successful operation of these organisations.

 

The main significant contributions that these organisations make include:

Employment

Any downturn in organisational performance which sees these organisations downsize the number of their employees or close part or all of their operations will have a significant impact on the community. Also, these organisations provide indirect employment through the many organisations from whom they purchase their components or parts.

 

Goods and Services

These organisations provide a significant number of important goods and services not only for the general public but for other organisations as well. In addition, these organisations undertake extensive research and development to extend the type and the range of products and services available, as well as to improve their quality and serviceability.

 

Revenue

These organisations are a major source of revenue for the government through the taxation system.

 

Competition

These organisations provide a source of competition between themselves, which benefits consumers and the community as a whole through lower pricing structures and better-quality products or services.

 

Community welfare

Some of these organisations provide the necessary infrastructure or goods and services, which enable members of the community to achieve minimum levels of welfare. They may also have a community service obligation to provide certain goods and services to the general community at zero cost or at a cost which simply recovers expenses. Some of these organisations also sponsor certain activities and groups within the community and undertake research into ways to preserve our environment and our heritage.

 

Foster international relations

Many of these organisations are multinational corporations and as a result are de facto diplomats which represent their country in their international dealings and, as such, assist with government policy implementation.

 

Business environments

 

Large-scale organisations do not operate in a vacuum; they operate within constantly changing commercial and non-commercial environments. These organisations are not static; they are dynamic organisations operating within an open (as opposed to a closed) system. It is important for managers within these organisations to understand the environment within which they operate so that they can be proactive about any likely impacts that the environment may have on the organisation. In this way managers can make any necessary changes to ensure the continued success of the organisation and the attainment of its goals and objectives.

Large-scale organisations operate within essentially two broad environments – the internal environment and the external environment. The external environment may be further divided into the task environment and the general environment.

 

There are three major sectors or contributors to the internal environment which may impact on the organisation – management, employees and the culture of the organisation.

 

Management obviously has a significant impact on the way that the organisation operates and functions, through the management functions already discussed.

The employees may impact on the organisation through the tasks that they perform and the way that they execute those tasks.

The culture of the organisation has a significant impact, as it incorporates ‘the way that things are done within the organisation’. The culture of the organisation includes the ‘system of shared values’ inherent within the organisation, i.e. what people both within and outside the organisation believe the organisation stands for and how it operates, e.g. the quality of its service and the level of the organisation’s employee relations.

 

If the culture were to alter, then the operations of the organisation would change as well. It is worth noting that the organisation has a fair degree of control over these sectors and the impact that they have on the organisation.


Quantitative Methods for Business

RPF GLOBAL

Patrick Chua is the senior vice-president of RPF Global, a firm of financial consultants with offices in major cities around the Pacific Rim. He outlines his use of quantitative ideas as follows:

“Most of my work is communicating with managers in companies and government offices. I am certainly not a mathematician, and I am often confused by figures, but I certainly use quantitative ideas all the time. When I talk to a board of directors, they won’t be impressed if I say, ‘This project is quite good; if all goes well you should make a profit at some point in the future.’ They want me to spell things out clearly and say, ‘You can expect a 20% rate of return over the next two years.’

My clients look for a competitive advantage in a fast-moving world. They make difficult decisions. Quantitative methods help us to make better decisions – and they help to explain and communicate these decisions.

Quantitative methods allow us to:

a.  look logically and objectively at a problem;

b.  measure key variables and the results in calculations;

c.  analyse a problem and look for practical solutions;

d.  compare alternative solutions and identify the best;

e.  compare performance across different operations, companies and times;

f.  explain the options and alternatives;

g.  support or defend a particular decision;

h.  overcome subjective and biased opinions;

Quantitative methods are an essential part of any business’s tools and techniques. Without them, we just could not survive.”

(Source: P. Chua, talk to the Eastern Business Forum, Hong Kong, 2010; quoted in Donald Waters, Quantitative Methods for Business, 4th ed., Prentice Hall.)

 


Goals and Governance of the Firm

Corporations face two principal financial decisions. First, what investments should the corporation (or firm) make? Second, how should it pay for those investments? The first decision is the INVESTMENT decision; the second is the FINANCING decision.

The stockholders who own the corporation want its managers to maximise its overall value and the current price of its shares. The stockholders can all agree on the goal of VALUE MAXIMIZATION, so long as financial markets give them the flexibility to manage their own savings and investment plans. Of course, the objective of wealth maximization does not justify unethical behavior. Shareholders do not want the maximum possible stock price as such. They want the maximum honest share price.

How can financial managers increase the value of the firm? Mostly by making good investment decisions. Financing decisions can also add value, and they can surely destroy value if you screw them up. But it’s usually the profitability of corporate investments that separates value winners from the rest of the pack.

Investment decisions force a trade-off. The firm can either invest cash, or return it to shareholders, for example, as an extra dividend. When the firm invests cash rather than paying it out, shareholders forgo the opportunity to invest it for themselves in financial markets. The return that they are giving up is therefore called the opportunity cost of capital. If the firm’s investment can earn a return higher than the opportunity cost of capital, shareholders cheer and stock price increases. If the firm invests at a return lower than the opportunity cost of capital, shareholders boo and stock price falls.
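A small worked example (with invented numbers) may help. Suppose shareholders could earn 10% in the financial markets on investments of comparable risk, so 10% is the opportunity cost of capital. A project that turns $1 million of cash into $1.15 million a year from now earns 15% and adds value; discounting the payoff at 10% gives its net present value.

    # Illustrative numbers only: project return vs. the opportunity cost of capital.
    opportunity_cost_of_capital = 0.10      # assumed 10% available elsewhere at similar risk
    investment = 1_000_000                  # cash the firm could instead pay out today
    payoff_next_year = 1_150_000            # what the project returns in one year

    project_return = payoff_next_year / investment - 1                    # 15%
    value_today = payoff_next_year / (1 + opportunity_cost_of_capital)    # about $1,045,455
    net_present_value = value_today - investment                          # about +$45,455

    print(f"Return {project_return:.0%} vs. cost of capital "
          f"{opportunity_cost_of_capital:.0%}; NPV = ${net_present_value:,.0f}")

If the payoff were only $1.05 million, the 5% return would fall short of the 10% opportunity cost, the net present value would be negative, and shareholders would be better off receiving the cash as a dividend.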

Managers are not endowed with a special value-maximizing gene. They will consider their own personal interests, which creates a potential conflict of interest with outside shareholders. This conflict is called a principal-agent problem. Any loss of value that results is called an agency cost.

Corporate governance helps to align managers’ and shareholders’ interests, so that managers pay close attention to the value of the firm. For example, managers are appointed by, and sometimes fired by, the board of directors, who are supposed to represent shareholders. The managers are spurred on by incentive schemes, such as grants of stock options, which pay off big only if the stock price increases. If the company performs poorly, it is more likely to be taken over. The takeover typically brings in a fresh management team.

Remember the following three themes, for you will see them again and again throughout this discussion:

(1) Maximizing value.

(2) The opportunity cost of capital.

(3) The crucial importance of incentives and governance.

 

(Principles of Corporate Finance, 10th ed., Brealey, Myers, Allen; McGraw-Hill Irwin, 2011, Ch. 1)

 

 

 
