Monday, July 22, 2024

Amazon Cognito User Pools for CIAM

If you're looking for a CIAM solution, then Amazon Cognito is definitely worth a try. I wrote a basic guide to get started quickly with Cognito user pools: user sign-up, sign-in, password recovery, plus an extra layer of security through PKCE.

Cognito is surprisingly cost-effective. An estimate for 10,000 MAUs (monthly active users) can be as low as $0.015 per user per month. Not bad for a robust, secure, full-featured CIAM solution based on open standards.
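
As a back-of-the-envelope check, here is a minimal sketch of that estimate (the $0.015 per-MAU figure comes from the estimate above, not from a live pricing API):

  # Rough monthly and annual cost estimate for Cognito user pools,
  # assuming a flat $0.015 per monthly active user (MAU).
  mau = 10_000
  price_per_mau = 0.015          # USD per MAU per month (estimate from above)

  monthly_cost = mau * price_per_mau
  annual_cost = monthly_cost * 12

  print(f"Monthly cost for {mau:,} MAUs: ${monthly_cost:,.2f}")   # $150.00
  print(f"Annual cost: ${annual_cost:,.2f}")                      # $1,800.00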

Friday, July 19, 2024

AWS EKS Cluster Deployment

If you plan to work with managed Kubernetes clusters, I wrote an eksctl guide to get started. It's an introduction to a command-line tool that automates K8s cluster creation in AWS. The guide walks through the cloud infrastructure the tool puts in place, following a simple use case: create a cluster with a single command, without a config file.
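
For reference, that single command looks roughly like the sketch below, driven from Python here; the cluster name, region, and node count are illustrative placeholders, and the exact flags are covered in the guide:

  # Create an EKS cluster with a single eksctl command (no config file).
  # Assumes eksctl and AWS credentials are already set up locally.
  import subprocess

  cmd = [
      "eksctl", "create", "cluster",
      "--name", "demo-cluster",     # placeholder cluster name
      "--region", "us-east-1",      # placeholder region
      "--nodes", "2",               # small managed node group
  ]
  subprocess.run(cmd, check=True)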


Thursday, July 18, 2024

HAProxy Basics

If you're really into free, open-source load balancers, then you might enjoy a short guide I wrote for getting started with HAProxy. Set it up at your own pace on your local workstation to get a feel for how HAProxy works in a basic configuration.

Even after all these years, HAProxy continues to be regarded as a leading standard in software load balancing.
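
If you want to watch the load balancing happen on your workstation, a small check script helps. This is a minimal sketch, not part of the guide, and it assumes HAProxy is listening locally on port 8080 with backends that each return a response identifying themselves:

  # Tally which backend answers each request through a local HAProxy frontend.
  # Assumes HAProxy listens on localhost:8080 and each backend replies with
  # a distinct body (for example its own name).
  from collections import Counter
  from urllib.request import urlopen

  counts = Counter()
  for _ in range(20):
      body = urlopen("http://localhost:8080/").read().decode().strip()
      counts[body] += 1

  for backend, hits in counts.items():
      print(f"{backend}: {hits} responses")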


Saturday, June 15, 2024

Service Mesh

Before service meshes, engineering teams on the cutting edge of microservices architecture built in-house components to address the challenges they faced in the early days. As projects grew, so did the growing pains, and the features teams had to consider included basic service discovery, security, traffic management, and observability.

In a monolithic architecture, encapsulating cross-cutting concerns, the features that multiple components need to share, resolves similar problems. In either architecture, monolithic or microservices, the absence of a shared implementation inevitably leads to a bigger problem: code duplication.

For example, in a monolithic architecture, logging and error-handling code may need to be duplicated across several modules. Similarly for microservices, each service may implement its own authentication and authorization logic, leading to code maintenance challenges and inconsistencies.

Problems to Solve

It doesn't take hundreds of services for this need to become apparent. Avoiding duplicate code becomes difficult even when running fewer than twenty services.


In fact, in a minimalist two-service environment, where service A calls service B, how does service A find service B? What if service B moves? If a call fails, then how many times should service A retry? How does service B know that service A is actually calling it? Should service B always accept requests from service A? If a new version of service B is released, then can service A remain compatible with the previous version? What happens if both services log and trace calls independently? 

Solutions

To solve most of these problems, service meshes introduce an infrastructure layer called the data plane, in which a sidecar proxy is paired with each service instance. Sidecars keep the cross-cutting concerns separate from the application code. They intercept all outbound and inbound traffic for their services and are guided by system-wide policy provided by a control plane.


The control plane can also be thought of as a mesh coordinator. It keeps track of where service instances run and propagates configuration changes to the entire system. The control plane sends service discovery information to sidecars, enabling them to route traffic correctly. It also enables sidecars to establish a secure communication channel by issuing and rotating certificates. 
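
As a conceptual illustration only, not the API of any real mesh, the division of labor can be sketched in a few lines: the control plane hands the sidecar a routing table and a retry policy, and the sidecar resolves and retries calls so the application code doesn't have to. The endpoints, policy values, and function names below are made up:

  # Conceptual sketch of a sidecar using control-plane-supplied config.
  import random

  CONTROL_PLANE_CONFIG = {
      "routes": {"service-b": ["10.0.0.5:8080", "10.0.0.6:8080"]},
      "retries": 3,
  }

  def call_via_sidecar(service, request, send):
      """Resolve a service name and retry failed calls per policy."""
      endpoints = CONTROL_PLANE_CONFIG["routes"][service]
      for attempt in range(CONTROL_PLANE_CONFIG["retries"]):
          endpoint = random.choice(endpoints)      # simple load balancing
          try:
              return send(endpoint, request)       # mTLS, tracing, etc. would live here
          except ConnectionError:
              continue                             # retry on failure
      raise RuntimeError(f"{service} unreachable after retries")

  # Toy transport that fails some of the time, to exercise the retry path.
  def flaky_send(endpoint, request):
      if random.random() < 0.3:
          raise ConnectionError(endpoint)
      return f"200 OK from {endpoint}"

  print(call_via_sidecar("service-b", "GET /orders", flaky_send))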

Service mesh architecture offers enormous advantages in managing and securing microservices. By abstracting network and security concerns into sidecars, service meshes like Istio provide centralized control, policy enforcement, and observability. 

The downside is in deploying and managing the service mesh itself. This can become complicated, as service meshes require upfront consideration of cloud resource utilization (memory, CPU, bandwidth), and teams should think through the specific needs of their application architecture to justify the added complexity.

When implemented correctly, service meshes give engineering teams superpowers by improving overall system security, resilience, and scalability. As an added advantage, a service mesh simplifies application maintenance by helping teams avoid code duplication.


Saturday, April 22, 2023

Threat Modeling

In a highly secure organization, threat modeling must be an integral part of a Secure Release Process (SRP) for cloud services and software. It is a security practice used to evaluate potential threats to a system or application. Organizations can adopt threat modeling to get ahead of vulnerabilities before it's too late. 

The practice involves 6 steps:

1. Identify assets: What needs to be protected? This may include data, hardware, software, networks, or any other resources that are critical to the organization.

2. Create a data flow diagram: How does data flow through a system or application? What components talk to each other and how? A data flow diagram shows component interactions, ports, and protocols. 

3. Identify potential threats: What threats to the system exist? External threats include hackers and malware, while internal threats may include authorized users and human error. What harm can they cause to assets? 

4. Evaluate risk: Assess the likelihood and impact of each threat. How serious is the threat and how likely is it to happen? What would be the impact to the organization?

5. Prioritize threats: After a risk evaluation, prioritize threats by severity. This helps organizations focus attention on addressing the most severe threats first (a simple scoring sketch follows this list).

6. Mitigate threats: The final step in threat modeling is to develop and implement measures to mitigate identified threats. This could include adding security controls, such as firewalls, intrusion detection/prevention systems, or a SIEM. Employee training on security best practices also goes a long way to mitigate threats. Regular tests and updates to security measures are other ways to mitigate security risks.
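
For steps 4 and 5, a simple likelihood-times-impact score is often enough to rank threats. Here is a minimal sketch, using made-up example threats and 1-5 scales (both assumptions, for illustration only):

  # Rank threats by a simple risk score: likelihood x impact (1-5 scales).
  from dataclasses import dataclass

  @dataclass
  class Threat:
      name: str
      likelihood: int   # 1 (rare) to 5 (almost certain)
      impact: int       # 1 (negligible) to 5 (severe)

      @property
      def risk(self) -> int:
          return self.likelihood * self.impact

  threats = [
      Threat("SQL injection against a database", 3, 5),
      Threat("Internal user error exposing credentials", 4, 4),
      Threat("Malware on an admin workstation", 2, 5),
  ]

  # Highest risk first, so the most severe threats get attention first.
  for t in sorted(threats, key=lambda t: t.risk, reverse=True):
      print(f"{t.risk:>2}  {t.name}")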

A review of a threat model should happen at least once a year as part of an SRP to catch new threats and to assess architecture changes to the system or application as it evolves. By identifying potential threats proactively, organizations can significantly reduce the risk of a cybersecurity attack.

Sample Threat Model

In this sample threat model, the focus is a PAM solution deployed deep inside the corporate network in a Blue Zone. To identify the assets needing protection, the diagram expands out to include all inbound connections into the PAM solution and all outbound connections from it.

Sample Threat Model

Network Zones

The sample diagram can be broken down by network zone: Blue, Yellow, and Red. 

Blue - Highly restricted. Contains mission-critical data and systems. Applications here can talk to each other, but they shouldn't reach out to any of the other zones; if they do, the traffic should be carefully monitored.

Yellow - Like a DMZ (Demilitarized Zone), this zone hosts a services layer of APIs and user interfaces that are exposed to authenticated/authorized users. It also hosts a SIEM to ingest logs from external sources.

Red - This zone is uncontrolled. It is completely untrusted because of the limited controls that can be put into place there. As such, it's viewed as a major security risk. Sensitive assets inside the organization must be isolated as much as possible from this zone. It could be a customer's network or the big bad internet.

Assets

The A1-A18 labels identify and classify the assets that need to be protected. In this model, assets to protect include logs, alert data, credentials, backups, device health metrics, key stores, as well as SIEM data. And since the focus of this threat model is the PAM solution itself, the primary assets are the elevated customer device credentials, A4.

Threats

The red labels T1-T5 represent threats to the applications and data inside each zone. In this model, threats do not include external unauthenticated users because the system is locked down to prevent this type of access. But it does include both internal authenticated and internal unauthenticated users as threats, simply because human error could lead to a security incident. SQL injection is also identified as a threat to the databases.

Controls

To mitigate threats, controls and safeguards are put in place. In this model, a VPN sits in front of the internal network and VPN access is required to get in. All traffic is routed through an encrypted VPN tunnel. In the diagram this is the dotted line underneath the Red Zone labeled C1. Other controls include firewalls to allow traffic only through specific ports and protocols, as well as encryption of data in transit by SSL/TLS. 

An intrusion detection and/or intrusion prevention (IDS/IPS) tool is in place, labeled C7, to capture and analyze traffic. Anything suspicious generates an alert for security personnel to act upon. An IDS can be used to detect a wide range of attacks, including network scans, port scans, denial-of-service (DoS) attacks, malware infections, and unauthorized access attempts. Other controls throughout the diagram are placed there to protect assets from identified threats.

Data Classification

The sample threat model does not have an extensive data classification scheme, as it only identifies sensitive data. But other models could provide a more granular data classification scheme to better explain what kind of data is stored where and how to protect it. 

For example, HIPAA (Health Insurance Portability and Accountability Act) protects the privacy and security of individuals' medical records and personal health information (PHI). The law applies to health care providers, health plans, and health care clearinghouses, as well as their business associates, who handle PHI. HIPAA requires covered entities to implement administrative, physical, and technical safeguards to protect the confidentiality, integrity, and availability of PHI. Non-compliance with HIPAA regulations can result in significant fines and legal penalties.

HIPAA, PCI-DSS, and GDPR mandate that organizations implement security measures to protect sensitive data. Threat modeling as a security practice helps organizations to comply with regulatory requirements. 

Once it's created, a threat model diagram can be reviewed by the organization's security team and kept current with architectural changes as the system evolves over time. The threat model provides a basis for a re-assessment of the threats and controls in place to protect assets. 

By investing in threat modeling, an organization can improve its security posture and reduce the risk of cyber-attacks.

5-Service Cloud Architecture Model

A primary goal of cloud architecture is to provide a cloud computing environment that supports a flexible, scalable, and reliable platform for the delivery of cloud services. 

In terms of layers, cloud architecture may include infrastructure, platform, and software layers, often referred to as IaaS, PaaS, and SaaS. Infrastructure comprises the physical servers, storage devices, and networking required to support cloud services. Platform refers to the software frameworks and tools that are used to develop and deploy cloud applications. And software refers to the applications and services, built on top of the infrastructure and platform layers, that are provided to end users.

If we really boil it down to an essence, it's possible to define a cloud architecture composed of 5 core elements:

1. Load Balancer
2. Microservices
3. System of Record 
4. System of Engagement
5. Messaging

1. Load Balancer

To distribute incoming network traffic evenly and to prevent overloading of any single resource, a Load Balancer provides high availability (HA) through failover and request distribution by an algorithm like round-robin. Rules are added here to send traffic to the nodes of a cluster uniformly, or based on their ability to respond.
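
Round-robin distribution is simple enough to sketch directly. Here is a minimal example; the node addresses are placeholders for cluster members sitting behind the load balancer:

  # Round-robin request distribution across a fixed set of backend nodes.
  from itertools import cycle

  nodes = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]   # placeholder cluster nodes
  next_node = cycle(nodes)

  def route(request):
      """Send each incoming request to the next node in rotation."""
      target = next(next_node)
      print(f"routing {request} -> {target}")
      return target

  for i in range(6):
      route(f"req-{i}")   # req-0 -> .10, req-1 -> .11, req-2 -> .12, then repeats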

2. Microservices

Microservices allow APIs to be deployed fast and often. As a core platform layer, microservices provide a way for APIs to be combined when there's a need to correlate data from different sources. They isolate the lower layers of the platform from end-user applications, and they support long-term growth in development team size; the number of applications and services can also grow here, in isolation from each other.


3. System of Record

A System of Record (SOR) is the primary source of data used as the source of truth for a particular business process or application. A database, a file system, or any other software system that stores and manages data can serve as a system of record. It provides a unified, accurate, and up-to-date view of critical business data. A source of truth also needs a disaster recovery plan as well as backups.

4. System of Engagement

All user interaction with the applications and services delivered by the platform happens through a System of Engagement (SOE), where users log in, search, and interact with the cloud services provided. To better engage with customers and stakeholders, the SOE provides them with personalized, interactive experiences that are tailored to their needs and preferences.

5. Messaging

Finally, a messaging system is needed for service-to-service communication and coordination between applications. A messaging system adds the facilities to create, send, receive, and read messages.
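
The core idea, producers publish messages that consumers read asynchronously, can be sketched with an in-process queue as a conceptual stand-in for a real broker such as Kafka (listed below):

  # Conceptual producer/consumer messaging with an in-process queue.
  # A real platform would use a broker (for example Kafka) instead of queue.Queue.
  import queue, threading

  messages = queue.Queue()

  def producer():
      for i in range(5):
          messages.put(f"order-created:{i}")   # create and send a message

  def consumer():
      while True:
          msg = messages.get()                 # receive and read a message
          if msg is None:
              break                            # sentinel: stop consuming
          print(f"processing {msg}")

  t = threading.Thread(target=consumer)
  t.start()
  producer()
  messages.put(None)
  t.join()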

These elements could be selected from a cloud catalog, or installed into a cloud provider such as AWS, Azure, or GCP. For example, the following open-source software could be deployed into a cloud platform, as an implementation of the 5-service cloud architecture:

1. HAProxy
2. Kubernetes
3. MySQL
4. Elasticsearch (ELK)
5. Kafka

There's no limit to the number of cloud services that can be designed, implemented, deployed, operationalized, and exposed to users as cloud-native applications, using any 2 or more of these platform services. 

In defining a cloud architecture model, architects lay a foundation for future product development. Through a 5-service cloud architecture, we set up product teams, engineering teams, and infrastructure teams to design, build, and deliver an unlimited range of applications and services to users and businesses, built on top of a system of sub-systems.

Disaster Recovery Plan

A disaster recovery (DR) plan provides a step-by-step procedure for unplanned incidents such as power outages, natural disasters, cyber attacks, and other disruptive events. This DR plan is intended to minimize the impact of a disaster on a primary data center by defining a way for the system to continue to operate. The plan includes a procedure to quickly return the production environment to an operational state.

A disruption to the operational state of the system in production can lead to lost revenue, financial penalties, brand damage, and/or dissatisfied customers. If the recovery time is long, then the adverse business impact of a disaster is greater. A good disaster recovery plan is intended to recover rapidly from a disruption, regardless of the cause.

This DR plan defines 4 basic elements:

  1. Response - A step-by-step procedure to perform in the event of a disaster that severely impacts the primary data center hosting the system, in order to fail over to a secondary site.
  2. Secondary Site (Backup) - A secondary, backup instance of the system (DR site) in support of business continuity in the event of a disaster.
  3. Data Replication - The data replication mechanism that keeps a secondary site in sync with a primary.
  4. Recovery - An approach to reconstitute the primary data center hosts after an assessment of the damage.

Disaster Recovery defines two primary objectives, Recovery Point Objective (RPO) and Recovery Time Objective (RTO):


Recovery Point Objective (RPO) - The maximum targeted period of time in which data or transactions might be lost from an IT service due to a major incident. For example, the time period elapsed during a data replication interval. 

Recovery Time Objective (RTO) - The targeted duration of time and a service level within which a business process must be restored after a disaster or disruption in order to avoid unacceptable consequences associated with a break in business continuity. For example, 24 hours to restore 95% of the service. 

Recovery

After damage to the primary site is assessed, a procedure to reconstitute the site to an operational state can be followed. The procedure is expected to be completed within 24 hours of a disaster. During this recovery period, a DR site is expected to provide business continuity, in some cases in read-only mode, with user operations queued up but not yet committed.

Wednesday, April 19, 2023

GitHub Actions + AWS CodeDeploy

Let's sketch an architecture diagram of a solution, and describe a CI/CD pipeline, including build, test, pre-deployment and post-deployment actions, and tools that could be used to deploy this application to AWS. 

Solution

One approach is to add GitHub Actions to a blog-starter repository that contains the Node.js application source code, to define a CI/CD pipeline. We could redeploy the blog-starter application onto an AWS EC2 Linux instance whenever source code changes are pushed to the GitHub repository. In this approach, AWS CodeDeploy services are integrated with GitHub and leveraged for this purpose.


High Level Flow

  • Developer pushes a commit to a branch in the blog-starter GitHub repo.
  • The push triggers a GitHub Actions workflow that runs AWS CodeDeploy.
  • The AWS CodeDeploy commands deploy the new commit to the EC2 instance that hosts the Node.js app.
  • Hook scripts are invoked to run pre-installation, post-installation, and application start tasks.

Architecture Sketch


Pipeline Stages

A. Test

Code quality tests can be implemented as GitHub pre-merge checks to run against the application source code. A GitHub pull request catches the case where a specific line in a commit causes a check to fail, and displays a failure, warning, or notice next to the relevant code in the Files changed tab of the pull request.

The idea here is to prevent a merge to the master branch until all code quality issues have been resolved. 

B. Pre-Deployment

Any dependencies that need to be installed on the Linux EC2 instance can be installed by a hook script that is defined in the CodeDeploy AppSpec file's hooks section. 

The CodeDeploy AppSpec file is placed in the blog-starter repository where the AWS CodeDeploy Agent can read it, for example under blog-starter/appspec.yml.

C. Build

The blog-starter Node.js application is built by running npm. This step is accomplished by another hook script that is defined in the CodeDeploy AppSpec file's hooks section under ApplicationStart.

D. Post-Deployment

Tasks that run after the application is installed, such as changing permissions on directories or log files, can also be defined in a hook script in the CodeDeploy AppSpec file under AfterInstall. 

AWS EC2 Instance

To host the application in AWS, an EC2 Linux instance can be defined and launched. Initial installation of Node.js and npm, as well as the app itself (by cloning the GitHub repository), can be done manually from the EC2 command line to get these pieces up and running in the cloud.

AWS CodeDeploy Agent

The CodeDeploy agent installer can be downloaded onto the EC2 Linux instance and run from the command line, and then the agent can be started as a service.

AWS CodeDeploy

Additional configuration is needed, for example, to create an AWS IAM Role and User that are authorized to run deployment commands through the CodeDeploy agent.

GitHub Actions

A deploy.yml file can be added under .github/workflows that defines the CI/CD pipeline steps, or what to do after a push. For example, 1) check out the branch and then 2) run a CodeDeploy deployment command.
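
The deployment command itself could be run with the AWS CLI or an SDK. Below is a hedged boto3 sketch of that step; the application name, deployment group, repository, and commit ID are placeholders, and in the real workflow the commit SHA would come from the push event:

  # Trigger a CodeDeploy deployment of a specific GitHub commit.
  # Application, deployment group, region, repository, and commit ID are placeholders.
  import boto3

  codedeploy = boto3.client("codedeploy", region_name="us-east-1")

  response = codedeploy.create_deployment(
      applicationName="blog-starter-app",
      deploymentGroupName="blog-starter-group",
      revision={
          "revisionType": "GitHub",
          "gitHubLocation": {
              "repository": "my-org/blog-starter",      # owner/repo placeholder
              "commitId": "<commit-sha-from-the-push>", # SHA pushed to the branch
          },
      },
  )
  print("Deployment started:", response["deploymentId"])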

Infrastructure as Code (IaC)

IaC, or Infrastructure as Code, refers to the use of machine-readable definition files to manage and provision IT infrastructure, instead of manual configuration. The infrastructure is treated like software code that can be versioned, tested, and automated. This allows for faster and more reliable deployment of infrastructure and easier management and scaling of resources. The infrastructure is defined in code using programming languages or specialized tools. This code can be executed to create, modify, or delete infrastructure resources in a repeatable and consistent manner, reducing the risk of human error, increasing the efficiency of IT operations, and improving overall system reliability.
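
For one concrete flavor of "infrastructure defined in code", here is a minimal, hedged sketch that provisions a stack from an inline CloudFormation template using boto3; the stack name, region, and the single S3 bucket resource are illustrative only:

  # Provision infrastructure from a machine-readable template (CloudFormation).
  import json
  import boto3

  template = {
      "AWSTemplateFormatVersion": "2010-09-09",
      "Resources": {
          "DemoBucket": {"Type": "AWS::S3::Bucket"},   # illustrative resource
      },
  }

  cfn = boto3.client("cloudformation", region_name="us-east-1")
  cfn.create_stack(
      StackName="iac-demo-stack",
      TemplateBody=json.dumps(template),
  )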

There are many IaC frameworks available, including popular ones like Terraform, Ansible, Puppet, Chef, CloudFormation, and SaltStack. The choice of framework depends on factors such as the size and complexity of the infrastructure, the specific needs of the organization, and the skills and expertise of the IT team. For example, Terraform is known for its ability to provision infrastructure across multiple cloud providers and on-premises environments, while Ansible is popular for its simplicity and ease of use. Puppet and Chef focus on configuration management and enforcing consistency across infrastructure resources, while CloudFormation is specific to Amazon Web Services (AWS) environments. SaltStack offers an event-driven automation approach that can help with high-scale and complex infrastructures. Ultimately, the best IaC framework is the one that meets the needs of the organization and aligns with its IT strategy and goals.


Saturday, December 03, 2016

Mortgage Rates and Home Prices

Over the holiday weekend I had a nice conversation with a friend about interest rates and home prices. We speculated about what might happen to home prices if rates go up in the future, but neither of us could point to evidence that a relationship between the two variables actually exists. Afterwards, I decided to look for real data to understand the statistical relationship between rates and home prices. From a historical perspective, do rising or falling interest rates have an impact on home prices?

Get the Data

For historical home price data I went to the most widely recognized gauge of U.S. home prices, the S&P/Case-Shiller U.S. National Home Price Index, and found a data set for the monthly value of the index dating back to 1975. The St. Louis Fed publishes the raw Case-Shiller home price index data and makes it easy to download. For historical mortgage rate data, I found that Freddie Mac provides historic tables of monthly mortgage rates dating back to 1971, and their site links to the raw data in spreadsheets. I downloaded the history table for the 30 Year Fixed-Rate Mortgage and focused on 1975 to the present to line up the dates in the two data sets.

Formulate a Question

Before getting into the data, I asked a few basic questions: do rates affect home prices? If so, how? More specifically, are the two variables correlated in any way? What can we expect from home prices if rates go up?

Explore the Data

To get a better sense for the range of values in the data and where we are today, I looked for the mean, min, max, and current values as well as the years in which the top and bottom of the range of values in both variables occurred:

Interest Rates


From January 1975 to August 2016:

Mean: 8.3%
Min: 3.3% (2012)
Max: 18.4% (1981)
Current: 3.4%
Standard Deviation: 3.2


Home Prices


From January 1975 to August 2016:

Mean: 97.5
Min: 25.2 (1975)
Max: 184.6 (2006)
Current: 184.4
Standard Deviation: 48.3

The range of values in the data shows that the fixed rate on a 30-year mortgage is almost as low today as it was at the very bottom of its recorded history in 2012. Also, the Case-Shiller Home Price Index is currently just 0.2 away from its highest point ever recorded since 1975, very close to the peak of the housing bubble in 2006. So we are near historical lows for rates and historical highs for prices. Or, as a statistician might put it: we are looking at the tail ends of the distribution.

Rates are currently one and a half standard deviations below the mean, while the Home Price Index is nearly two standard deviations above the mean.

Prepare the Data

I merged the two data sets into a single, 3-column table that has a row for every month from January 1975 to August 2016:

D X Y
1/1/1975 9.43 25.25
2/1/1975 9.10 25.29
3/1/1975 8.89 25.36
...
6/1/2016 3.57 182.19
7/1/2016 3.44 183.43
8/1/2016 3.44 184.42

Where D is a date, X is the rate, and Y is the housing index value.

Regression Analysis

To see if rates affect prices, we plot Y, the housing index value, as a dependent variable and X, the rate, as an independent variable. Since we think X has an impact on Y, we can use R to do a simple linear regression analysis on the data.
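
The analysis itself was done in R; for illustration, an equivalent minimal sketch in Python is shown here, assuming the merged table above has been saved as rates_prices.csv (a hypothetical file name) with columns D, X, and Y:

  # Simple linear regression of home price index (Y) on mortgage rate (X).
  import numpy as np
  import pandas as pd

  data = pd.read_csv("rates_prices.csv")
  x, y = data["X"].values, data["Y"].values

  slope, intercept = np.polyfit(x, y, 1)     # best-fitting line
  r = np.corrcoef(x, y)[0, 1]                # correlation coefficient

  print(f"regression line: Y = {intercept:.1f} + {slope:.1f} * X")
  print(f"correlation: {r:.2f}")             # strongly negative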


In the plot, a regression line, the best-fitting line that minimizes the sum of the squares of the vertical distances from each data point to the line, explains the relationship between the two variables. It slopes down. When y-axis values are high, x-axis values tend to be low; when y-axis values are very low, x-axis values are very high. Clearly, there is a strong negative correlation between the two variables, and the data visualization makes this plain to see. But a strong negative correlation only suggests that high values for X are associated with low values for Y across the 41 years of data analyzed here; it does not imply that X causes Y. In other words, a low fixed rate on a 30-year mortgage does not cause home prices to go up. That being said, it is not unreasonable to anticipate lower home prices as interest rates go up because, simply put, the regression line slopes down.

One area of the scatter-plot to highlight is the data point that is closest to the upper left-hand corner of the plot, in which prices are at all-time historical highs and rates are at all-time historical lows. This point is from 08/01/2016.


Friday, September 30, 2016

Why are we here?

Darwin:
  • Natural selection is about survival
  • Natural selection rewards tiny changes
  • Natural selection explains how but not why
  • We are here only long enough to pass on our genes
  • This is the reason we are here

Richard Dawkins provides an answer to this existential question from a scientific perspective. Dawkins argues that we appear to be breaking Darwin’s rules through technological progress. What does this mean? In Dawkins’ view, evolution explains how we came into the world with the basic goals to survive or reproduce. But we freed ourselves from spending all our time passing on our genes. The thing that freed us from our genes was also the result of natural selection: the human brain.

Natural selection rewarded a genetic advantage. Our brains got bigger and evolved the ability to set goals. We evolved the capacity to seek, to strive, and to set up short term goals in support of long term ones. The brain also gained the capacity to ask why. We were no longer content with what nature told us to do. Language became a tool. We adopted purposeful behavior through the communication of goals that benefit more than an individual. We accelerated the pace of evolution through technology, which is currently evolving millions of times faster than genetic evolution.

We created a technological world that enabled us to move faster, alleviate hunger, and cure disease. We started living longer. We invented with purpose. There was no purposeful design in nature. Powered by our technical progress, we explored the universe. We looked across the vacuum of space, backward in time to the birth of the universe. At the other extreme we looked at sub-atomic particles. We dissected the living cell and unraveled the digital code of genes. We hacked ourselves.

Dawkins explains that we provide the purpose in a universe that would otherwise have none. We are in charge. Why we are here resides in us.

Thursday, September 29, 2016

The Emotion Machine: Consciousness

The word consciousness troubles Minsky in Chapter 4 of The Emotion Machine, as do other words that are often used to describe what's going on inside our brains. Minsky calls them “suitcase words”. That is, words that have been around for centuries and carry too much meaning, like intelligence or cognition. These words point to multiple levels of mental activities, but too often over-simplify rather than explain. Suitcase words need to be unpacked.

Chapter 4 argues that suitcase words may also preserve outdated concepts. Long ago, it was thought that a “vital force” explained life in living organisms. A vital force simply infused the body of an organism to give it life. This belief was widely held before biology explained life as a massive collection of different processes that go on inside cells and membranes replete with intricate biological machinery. Consciousness, Minsky argues, doesn’t explain what happens inside the brain any more than the vital force explains what happens inside living organisms. It’s simply an outdated concept.

Also, an insight about brain evolution in this chapter suggests that the structures in our brains are massively redundant as “large parts of our brains work mainly to correct mistakes that other parts make” because “while some structures worked well in earlier times, they now behave in dangerous ways, so we had to evolve corrections for them.” This is one reason Minsky thinks human psychology is so difficult, because for every rule of thought that psychologists define, there are long lists of exceptions, given our evolutionary brain baggage. As soon as I find a good example of a dangerous behavior that evolution has corrected, I'll capture it.

Tuesday, September 27, 2016

The Emotion Machine

Read the first two chapters of The Emotion Machine, Minsky 2006. Chapter 1, “Falling in Love”, explains love in mechanical terms and argues that machines could possess the capacity to fall in love, simply by abandoning their critical faculties and forsaking most of their usual goals: “Love can make us disregard most defects and deficiencies, and make us deal with blemishes as though they were embellishments.” Love is a state in which the usual questions and doubts about someone are suppressed. Minsky describes the emotions we usually associate with love: passion, devotion, allegiance, affection, companionship, connection, as a variety of processes, that once triggered, lead us to think in different ways:
When a person you know has fallen in love, it’s almost as though someone new has emerged--a person who thinks in other ways, with altered goals and priorities. It’s almost as though a switch had been thrown and a different program has started to run. 
Minsky questions our understanding of loaded words such as emotion. We can’t learn much from a dictionary definition of  the word because a definition only hides what is really a “range of states” too complex to comprehend. He explains mood changes, say from angry to happy, as highly complex mental state changes. And mental states, in Minsky's theories of the mind, are based on the use of many small processes.

The idea of an instinct machine is introduced in this chapter. Minsky explains that three things happen inside an instinct machine: it knows how to recognize situations through sensors, it has some knowledge about how to react to them, and it uses muscles or motors to take action. In an instinct machine, sensors activate motors.

In trying to understand an emotion, the old question, “What are emotions and thoughts?”, should be replaced by “What processes are involved in an emotion?”

Tuesday, September 20, 2016

The Emotion Machine

Started to read The Emotion Machine, by Marvin Minsky, first published in 2006. I remember the first time I read about Minsky. It was in Steven Levy’s Hackers, the chapter about the old days at MIT, during Richard Greenblatt’s sophomore year when he wrote a FORTRAN compiler for the PDP-1:
Someone like Marvin Minsky might happen along and say “Here is a robot arm. I am leaving this robot arm by the machine.” Immediately, nothing in the world is as essential as making the proper interface between the machine and the robot arm, and putting the robot arm under your control, and figuring a way to create a system where the robot arm knows what the hell it is doing. Then you can see your offspring come to life.
- Steven Levy, Hackers: Heroes of the Computer Revolution, 1984

Thursday, January 08, 2015

Options Basics: Calls and Puts

Options are highly versatile financial instruments that open up a wide range of investment strategies for individual investors. Options have been actively traded on the Chicago Board Options Exchange (CBOE) since 1973. In essence, an option represents a contract between two parties to buy or sell a financial asset. The contract gives the owner the right to buy the asset (a call option) or the right to sell the asset (a put option) at a predetermined price and within a predetermined time frame. An option gives its owner the right to do something in the future. The owner of an option has the right but is not obligated to exercise the terms of the contract. If the owner does not exercise this right within the predetermined time frame, then the option and the opportunity to exercise it cease to exist, and the option expires.

Buying a call option for a financial asset is like buying a coupon for something. For example, a coupon for a concert ticket can be thought of as a call option for admission to a concert. Since the price of a concert ticket can go up in the weeks and days before the event, a savvy concert goer might want to lock in a ticket price by spending a little money (a premium) on a coupon that lets him pay a fixed price (the strike price) for a ticket anytime before the concert. Three things can happen to the ticket price in an open market before the concert: the price can go up, it can stay the same, or it can go down. When the ticket price goes up, the coupon owner can buy at a discount because the coupon guarantees a lower price. This is a profitable situation for the coupon owner because he can buy at a discount and immediately sell in the open market at a higher price. When the ticket price stays the same, the coupon owner pays full price, loses the small cost of the coupon, but if tickets are running out before the concert, then the coupon gives him time to buy until the night of the concert. When the price of a concert ticket goes down, maybe because the concert is not that great, the coupon owner may not want to buy a ticket after all, even at a lower price. In this case, a coupon owner loses the small cost of the coupon, but by waiting things out, he doesn't end up spending full price for a ticket up front, only to be stuck with admission to a lousy concert.

Buying a put option for a financial asset is like buying insurance for something. Spending a little money on insurance provides price protection for the thing insured. The value of a ticket can be insured by paying a premium. If the concert ticket price goes up or stays the same, then the insurance is not needed and the money spent on price protection is lost. If the ticket price goes down because the concert is not that great, then this form of insurance guarantees that the ticket holder will get a fixed price for the ticket (a refund), even if the market value of a ticket is much lower. When ticket prices go down, the insurance covers the difference between a fixed, higher price and the current, much lower price, and itself becomes more valuable as people are willing to pay more for greater price protection. When prices go down, the insurance can be sold for more than its original cost. For a small premium, insurance protects the value of the concert ticket if you buy one up front, provides time to make a purchase decision at a fixed price if you want to wait and see what happens to ticket prices until the concert, and increases in value when ticket prices decline.

When you sell something, you get to keep the money collected from the sale, provided that everything goes as expected through the end of a transaction. A ticket coupon seller collects the price of a coupon and expects concert ticket prices to either stay the same or to go down by the night of the concert, in order to keep the price of the coupon as his profit, just as a call option seller typically expects the price of a stock to stay the same or to go down by expiration for the same reason: to keep the premium. The seller of insurance for a concert ticket keeps the premium collected when concert ticket prices stay the same or go up, just as a put option seller collects a premium, and keeps it as a profit when stock prices stay the same or go up.

At a fundamental level, options markets behave like the markets for concert ticket coupons and insurance in these examples: buyers take one side, sellers take the other, and both parties speculate on the price of an underlying asset.

Why Trade Options?

In an options trade, the seller of an option takes on an obligation, while the buyer purchases a right. The buyer of a call is bullish, thinking the market will move up, while the seller of a call is bearish, thinking it will go down. Conversely, the buyer of a put is bearish while the seller of a put is bullish. Opposing sentiments about the market are implicit in every options trade as traders believe the market will move in a particular direction.

As individual investors, we trade options to get a better ROC (return on capital), to give ourselves better odds of success, to define our risk, and to combine options into profitable trading strategies. We are also interested in trading options to become active participants in the world of finance. Options trading teaches us the language of finance and helps us to develop a financial mind.

Buy A Call

A trader who thinks that a stock will go up can buy the right to purchase the stock (a call option) at a fixed price, instead of purchasing the stock itself. If the stock price at expiration is above the strike price by more than the premium paid, then he will make a profit. If the stock price is lower than the strike price, then he will let the option expire worthless, and lose only the amount of the premium.

Example

Suppose MSFT is trading at $46. A call option with a strike price of $46 expiring in a month is priced at $1. A trader’s assumption is that MSFT will rise sharply in the coming weeks and so he pays $100 to purchase a single $46 MSFT call option covering 100 shares ($1 x 100 = $100) with 30 days until expiration.

If the trader’s assumption is correct, and the price of MSFT stock goes up to $50 at option expiration, then he can exercise the call option and buy 100 shares of MSFT at $46. By selling the shares immediately in the open market at $50, the total amount he will profit from the exercise is $4 per share. As each option contract gives him the right to buy 100 shares, the total amount he receives when he sells the shares is $4 x 100 = $400. Since he paid $100 to buy the call option, his net profit for the entire trade is $300.00 ($400 - $100).

If the trader’s assumption is incorrect and the price of MSFT drops to $40 at option expiration, then the call option will expire worthless and his total loss is limited to the $100 paid to purchase the option.

Profit and Loss (P&L) at Expiration


If he had purchased 100 shares of MSFT at $46, that is, if he had purchased the stock outright, then his total investment would have been $4,600. With MSFT trading at $50, the investment would generate a $400 profit ($5,000 - $4,600 = $400). This is an 8.6% return on investment. Purchasing a call option, as detailed above, generates a $300 net profit, on a total investment of $100, or a 300% RoR (return on risk).
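
Here is a minimal sketch that reproduces these long call numbers (the strike, premium, and expiration prices are the assumptions from the example above):

  # P&L at expiration for a long call, using the MSFT example numbers above.
  def long_call_pl(stock_price, strike, premium, shares=100):
      intrinsic = max(stock_price - strike, 0.0)   # value of the call at expiration
      return (intrinsic - premium) * shares

  print(long_call_pl(50, 46, 1))   #  300.0 -> the $300 net profit above
  print(long_call_pl(40, 46, 1))   # -100.0 -> loss limited to the premium paid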

Buy A Put

A trader who thinks that a stock will go down can buy the right to sell the stock (a put option) at a set price. If the stock price at expiration is below the strike price by more than the premium paid, then he will make a profit. If the stock price at expiration is higher than the strike price, then he can let the option expire worthless, and lose only the amount of the premium paid.

Example

Suppose MSFT is trading at $46. A put option with a strike price of $46 expiring in a month is priced at $1. A trader’s assumption is that MSFT will decrease sharply in the coming weeks and so he pays $100 to purchase a single $46 MSFT put option covering 100 shares ($1 x 100 = $100) expiring in 30 days.

If his assumption is correct, and the price of MSFT stock goes down to $40 at option expiration, then he can buy 100 shares in the open market at $40 and exercise the put option to sell them at $46. The profit from the exercise is $6 per share ($46 - $40). As each option contract gives the right to sell 100 shares, the total amount he receives is $6 x 100 = $600. Since he paid $100 to buy the put option, his net profit for the entire trade is $500 ($600 - $100).

If his assumption is incorrect and the price of MSFT rallies to $50, then the put option will expire worthless and the total loss of the trade is limited to the $100 paid to purchase the option.

Profit and Loss (P&L) at Expiration


The risk in a long put strategy is limited to the price paid for the put option no matter how high the stock price trades on the expiration date.

Sell A Call

A trader who thinks that a stock price will decrease can sell a call. When a trader sells a call he collects a premium. If the stock price is below the strike price by the expiration date, then the short call expires worthless and the seller keeps the premium. The premium collected is the seller’s profit. Otherwise, if the difference between the stock price and the strike price is greater than the amount collected in premium, then the seller would lose money. Since there is no limit to how high stocks can go, the losses on a trade like this, in theory, are unlimited.

This is also called a naked call. In practice, to sell naked options, a brokerage firm typically requires a seller to deposit funds (margin requirements) sufficient to cover a 2 standard deviation move in the stock price. This trade results in a net credit to the seller, but requires margin to be maintained in the seller’s account until expiration.

Example

Suppose MSFT is trading at $46. A call option with a strike price of $46 expiring in a month is priced at $1. A trader thinks that MSFT will drop below $46 by expiration, so he sells the MSFT 46 call option expiring in 30 days, and receives a $100 credit ($1 x 100 shares). If he’s right, and MSFT drops and stays below $46 by expiration, then he keeps the $100 credit as a profit.

If he’s wrong, and MSFT rallies (goes up) to $50, then the buyer of the option may exercise his right to buy MSFT at the lower price (the strike price of the option), and the difference would be paid by the seller. In this case, the seller would lose:

(Market Price x 100 shares) - (Strike Price x 100 shares)
($50 x 100) - ($46 x 100)
$5,000 - $4,600 = $400

Since the seller collects $100 when the trade is placed, the net loss is only $300 ($400 - $100).

Profit and Loss (P&L) at Expiration

Sell A Put

A trader who thinks that a stock will go up can sell a put. When a trader sells a put he collects a premium. If the stock price is above the strike price by the expiration date, then the short put expires worthless and the seller keeps the premium. The premium collected is the seller’s profit. Otherwise, if the stock price is below the strike price by more than the amount collected in premium, then the seller would lose money.

This is also called a naked put. In practice, to sell naked options, a brokerage firm typically requires a seller to deposit funds (margin requirements) sufficient to cover a 2 standard deviation move in the stock price. This trade results in a net credit to the seller, but requires margin to be maintained in the seller’s account until expiration.

Example

Suppose MSFT is trading at $46. A put option with a strike price of $46 expiring in a month is priced at $1. A trader thinks that MSFT will rise above $46 by expiration, so he sells the MSFT 46 put option expiring in 30 days, and receives a $100 credit ($1 x 100 shares). If he’s right, and MSFT stays above $46 by expiration, then he keeps the $100 credit as a profit.

If he’s wrong, and MSFT drops to $40, then the buyer of the option may exercise his right to sell MSFT at the higher price (the strike price of the option), and the difference would be paid by the seller of the option. In this case, the seller would lose:

(Market Price x 100 shares) - (Strike Price x 100 shares)
($40 x 100) - ($46 x 100)
$4,000 - $4,600 = -$600

Since the seller collects $100 when the trade is placed, the net loss is only $500 (-$600 + $100).

Profit and Loss (P&L) at Expiration
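
The seller's side is the mirror image of the buyer's. Here is a minimal sketch that reproduces the short call and short put outcomes described above (strikes, premiums, and the expiration prices are the example assumptions):

  # P&L at expiration for short (naked) calls and puts, per the examples above.
  def short_call_pl(stock_price, strike, premium, shares=100):
      intrinsic = max(stock_price - strike, 0.0)
      return (premium - intrinsic) * shares   # keep the premium, pay out intrinsic value

  def short_put_pl(stock_price, strike, premium, shares=100):
      intrinsic = max(strike - stock_price, 0.0)
      return (premium - intrinsic) * shares

  print(short_call_pl(50, 46, 1))   # -300.0 -> the short call's $300 net loss
  print(short_call_pl(44, 46, 1))   #  100.0 -> call expires worthless, keep the $100 credit
  print(short_put_pl(40, 46, 1))    # -500.0 -> the short put's $500 net loss
  print(short_put_pl(48, 46, 1))    #  100.0 -> put expires worthless, keep the $100 credit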

Friday, December 19, 2014

Book Structure

Considering “The Impact of 3D Printing” as a possible subject for a book. Also giving consideration to different book structures, division into chapters, and basic metrics per chapter, per paragraph, to support an argument, to make points, provide insights, and to cite sources. I wrote a 10-page article a while back that was divided into four main sections wrapped by an introduction and a conclusion. Simple. Each of the sections barely scratched the surface and I had a single source cited in the introduction. For a book-length project, 200-300 pages, I simply can’t begin without defining an overall structure.

How many sources? How about ten books (or sources) per chapter? A 10-chapter book would then cite 100 books (or sources). If each chapter devotes 5 paragraphs to a book or to a source, that works out to 50 paragraphs per chapter. A five-paragraph treatment of a book is a mini-essay. Let's say a mini-essay is structured as an opening argument paragraph, followed by three supporting paragraphs, and a conclusion paragraph. Since a single page accommodates around 2 paragraphs, and a full chapter would contain 50 paragraphs, each mini-essay would span about 2.5 pages, or 25 pages per chapter. You can then think of the book as a series of 5-paragraph, 2-and-a-half-page mini-essays. The book would contain 10 x 10 (10 mini-essays x 10 chapters), or one hundred mini-essays. That's 250 pages. With a 10-page intro and a 10-page conclusion, that's a 270-page book right there.

So I would need to focus on writing 5-paragraph mini-essays. Once I write 100 of these and connect them, I will have a book-length work. How long will it take? I think I can write 2 high quality mini-essays per week. I’m talking really polished here. That’s just 10 paragraphs a week, or 50 weeks total. That also works out to roughly 8 essays per month, or a chapter every 5 weeks or so.

Now, this breakdown is calculated as if I were making steady progress like a well-calibrated little writing machine. But I know that I will write absolutely nothing on some days, not a word, and maybe write 5-times the required daily average on others.

Monday, December 15, 2014

Amazon books came in Monday 12/15! When Genius Failed: The Rise and Fall of Long-Term Capital Management (2001) by Roger Lowenstein, Myths of Rich and Poor: Why We’re Better Off Than We Think (1999) by W. Michael Cox and Richard Alm, and The Shallows: What the Internet is Doing to Our Brains (2010) by Nicholas Carr.

Read chapters 1 through 7 of Lowenstein’s Rise and Fall of LTCM. Also read the first five chapters of Nicholas Carr’s The Shallows while note taking in the margins and underlining this and that. Carr hasn’t talked about the brain as much as I thought he would have in the first five chapters. He cites some study about monkeys with severed hand nerves and the corresponding rewiring in the brain at  the synaptic level, but there’s not much science there.

Monday, December 08, 2014

Found several The New Yorker pdf torrents. Printed at work. Also read a few New York Times book reviews from 100 Notable Books of 2014. Started "Book Log" blog entries to home in on a research topic.

Monday, December 01, 2014

Books

Ordered several books cited by Lanham in The Economics of Attention: Style and Substance in the Age of Information (2006). Also found a chapter from Katherine Hayles’ How We Think: Digital Media and Contemporary Technogenesis (2012) in PDF. Read it carefully and extracted citation styles, that is, how she cites research. Also bought a copy of The Shallows: What the Internet is Doing to Our Brains (2011) by Nicholas Carr. A trip to Barnes and Noble’s “Science” shelves, opposite “Math”. The Math section is replete with Dummies books.

Saturday, October 27, 2012

Will Home Prices Continue To Rise?

During the past several months, while selling a property and looking for a new one, my perspective on the housing market has changed.

Now deeply involved in the home buying process, I can't help but worry about the current enthusiasm around recent home price increases. This is my attempt to dampen this unfounded optimism with a little caution.

What I see as some basic facts:

Factors that are contributing to current home price increases:
  • Low interest rates
  • FHA lending
  • Low inventory

Factors that could contribute to home price declines:
  • Higher inventory
  • Traditional lending standards
  • Unemployment
  • Salary stagnation
  • Student Loan debt

Basic Questions

What is going to sustain home price increases? Do interest rates stay low or decrease even further in the next 3-5 years? The Fed stated that rates will stay low until late 2014. Low rates will not last indefinitely and stand a good chance of being higher in 2015.

Do FHA loans provide a solid foundation for the full housing recovery everyone wants, and should FHA be the way forward? Not really.

Will inventory levels stay low? If foreclosure activity picks up again, and lenders begin to clamp down on delinquent borrowers, a much-needed flow of distressed property will come onto the market and inventory levels will rise.

From this standpoint, it seems that the factors that are contributing to the recent home price increases are weak, and their effect will not be long lasting. It's an artificial high.

Another important question: are the factors that could contribute to home price declines more than likely to outweigh the recent increases? I think yes.

A higher outflow of distressed property is long overdue.

Lending will eventually return to higher rates and traditional 20% down payments: no more FHA lending subsidy to artificially prop up the market.

Jobs are still scarce, except for the highly skilled. This means less demand for housing on the low end. Companies have been able to hold back on salary increases because of a slow economy, and know full well that employees would rather stay in their job than take chances in the current job market looking for something better. Stagnant wages will widen price-to-income ratios and put downward pressure on home prices over time. And student loan debt will continue to be a huge financial obligation for young, potential first time buyers, which severely constrains the emergence of a move-up market.

What we are seeing now in prices is nothing more than a small rally that is limited to a few, high demand, low inventory areas. This is likely to continue into the Spring of 2013. A large contributing factor to recent price increases is the result of bidding wars for very limited numbers of formerly distressed properties that were acquired through tight connections by investors looking for a quick flip.

The sharp price declines of 2007 should have continued well beyond 2011, but did not because of a government-induced slowdown in foreclosure activity early in 2012. Lenders are basically waiting for the outcome of the election next week to know what action they will be able to take next to better deal with large numbers of delinquent borrowers. Lenders' hands are tied at the moment, but they want badly to clean this up.

Short sales are providing some relief to lenders who are unable to deal effectively with delinquent borrowers. The few short sales that do hit the market are long overdue. Some lenders give short sellers incentives to move on. But it's difficult to turn a delinquent borrower into a short seller when they have the option to squat indefinitely at no cost, a behavior encouraged by tight regulation that limits what lenders can do about squatters.

The fundamental problem remains unresolved: too many people continue to be way in over their heads. While a short rally in home prices has allowed some people to escape a negative equity situation (like me), this is a much smaller group than the large numbers of people who purchased way too high, refinanced (cashed out), took out a HELOC, signed up for a teaser rate on that second mortgage, and are now delinquent on all of the above. As long as this large lump continues to sit, off the market, we may never see a true recovery.

Another shake out is badly needed. 

Tuesday, January 03, 2012

Giftedness


My daughter was identified as gifted in the 3rd grade and was enrolled in California's Gifted and Talented Education (GATE) program. Some quotes, definitions, and general descriptions of giftedness:
Like a talent, intellectual giftedness is usually believed to be an innate, personal aptitude for intellectual activities that cannot be acquired through personal effort.
Gifted individuals experience the world differently, resulting in certain social and emotional issues. 
Joseph Renzulli's (1978) "three ring" definition of giftedness is one well-researched conceptualization of giftedness. Renzulli’s definition, which defines gifted behaviors rather than gifted individuals, is composed of three components as follows: gifted behavior consists of behaviors that reflect an interaction among three basic clusters of human traits—above average ability, high levels of task commitment, and high levels of creativity. 
Generally, gifted individuals learn more quickly, deeply, and broadly than their peers. 
They may also be physically and emotionally sensitive, perfectionistic, and may frequently question authority.
Many gifted individuals experience various types of heightened awareness and may seem overly sensitive. For example, picking up on the feelings of someone close to them, having extreme sensitivity to their own internal emotions, and taking in external information at a significantly higher rate than those around them. These various kinds of sensitivities often mean that the more gifted an individual is, the more input and awareness they experience, leading to the contradiction of them needing more time to process than others who are not gifted.
Healthy perfectionism refers to having high standards, a desire to achieve, conscientiousness, or high levels of responsibility. It is likely to be a virtue rather than a problem, even if gifted children may have difficulty with healthy perfectionism because they set standards that would be appropriate to their mental age (the level at which they think), but they cannot always meet them because they are bound to a younger body, or the social environment is restrictive. In such cases, outsiders may call some behavior perfectionism, while for the gifted this may be their standard.

Saturday, August 27, 2011

U.S. Virgin Islands


We are headed to St. Thomas, U.S. Virgin Islands next week for a family vacation. A Google image search says more about St. Thomas than I could possibly describe here.

Our flight leaves Los Angeles on Monday night to Orlando FL, and from there to Cyril E. King Airport. We are staying at the Ritz Carlton.

Vacation pictures to follow.

Friday, August 19, 2011

News Consumption Patterns

Every day, Feedspikes processes a massive subscription to 1000+ RSS feeds. The feed subscription process runs continuously and guarantees that only new, unique stories are consumed throughout the day. To quantify this processing in terms of the total, unique news items coming in, or the actual news consumption rates for the site, I ran a few queries against the Feedspikes database. In this analysis, one <item> from an RSS feed is considered a news item.

From the last 4 weeks of activity, the following patterns emerge for monthly, weekly, and hourly news consumption rates:

Monthly



The monthly news stream forms a zig-zag pattern as the news item count increases gradually early in the week, reaches a peak, and then decreases steadily until the weekend. From the monthly pattern it is clear that weekends are relatively quiet for most news organizations Feedspikes subscribes to, and that the highest volume of news is consumed on Wednesday and Thursday. Following this pattern, from 7/18/11 to 8/18/11, a total of 213,775 news items were consumed by the site.  

Weekly



Shaped like an inverted "U", the weekly news consumption pattern starts with a timid Sunday, followed by an encouraging Monday, right before a climactic mid-week when a flood of news items is released. The news item count then tapers off gradually through the end of the week. During the week pictured above, 7/31/11 through 8/6/11, a total of 51,813 news items were consumed.

Daily



A spike emerges in the daily news consumption pattern at around 3 pm CST (4 pm EST). After this spike, the subscription process brings in a sustained news item count throughout the afternoon, evening, and into the night. Earlier in the day, a minor spike appears between 8 am and 10 am; this small but noticeable rise in the number of new stories consumed occurs on nearly all of the days in the 4-week period analysed. A total of 9,065 news items were consumed on 8/3/11.

Bottom Line:

* Weekends are quiet relative to weekdays

* Most active day of the week is Wednesday

* Most active hour of the day is 4 pm EST

Capturing these patterns helps set expectations for what counts as a "normal" feed consumption rate for the site.

Sunday, April 03, 2011

Problem Domain Crossover

When technology crosses several different problem domains, it remains independent of any single customer's problem. Domain-crossing technology, if we can call it that, has no specific vocabulary, and if there are any reusable key concepts to be found, they are so vague that developing a specific problem domain for the technology may be difficult. At this level of generality, special features introduced into the technology to focus on any particular problem are simply not going to translate well across domains.

If we were looking for a specific problem domain, we would have to ask which concepts fall outside the scope of that domain. Where are the boundaries? In other words, if there is one specific problem domain for the technology, what concepts should never be modeled in the domain, and why?

If your software can be used across problem domains, these are important questions to ask before introducing a new abstraction or conceptual model into your core code base. Whether it affects your service layer, your data model, or ends up deeply embedded at some other level, a new abstraction may severely constrain future application of the technology in different spaces.

I always try to design for the general case in software development projects, because you can never anticipate how the technology will be applied in the future or where your organization may derive business value from applying it elsewhere.

Software developed this way eventually gets deployed as a cohesive solution in a customer environment, where that level of generality adds complexity and can be overwhelming to grok. Sometimes it takes a highly skilled technologist, looking well beyond the documentation, to put all the pieces together into something that solves a specific set of customer problems.

The end result of this difficult exercise is inevitable: a collection of very loosely integrated pieces talking to one another, none of which individually points to a particular customer problem, but which, once integrated and configured in production as something more specific, deliver what is needed.

The Ctrl tries to fit into this way of thinking about software. The language provides abstractions only to the extent that the timestamp of an event can be manipulated by statements in the language to look up related events and compare event properties. That's pretty much it. The other language features supported are general programming constructs: if statements, variables, and for loops.

With these basic features, the Ctrl can be used to solve a myriad of complex problems involving the detection of patterns in event streams. For those cases in which a customer problem cannot be solved with the basic features alone, the rules engine provides a call-out mechanism to invoke external code. That external code remains isolated in the environment of the customer it is developed for, and never becomes part of the core.
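As a rough sketch of the idea (not the actual rules engine API; the class and method names below are made up for illustration), a call-out mechanism can be as simple as a registry that maps names to externally supplied handlers, so customer-specific code stays outside the core:

# Minimal sketch of a call-out registry; names and signatures are
# illustrative only, not the real rules engine API.
from typing import Callable, Dict

class CallOutRegistry:
    """Maps call-out names to externally supplied handlers."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[..., None]] = {}

    def register(self, name: str, handler: Callable[..., None]) -> None:
        # Customer-specific code registers itself here; the core never
        # needs to know what the handler actually does.
        self._handlers[name] = handler

    def run(self, name: str, *args: str) -> None:
        handler = self._handlers.get(name)
        if handler is None:
            raise KeyError(f"No call-out registered under '{name}'")
        handler(*args)

# Example: a customer-specific handler lives outside the core code base.
registry = CallOutRegistry()
registry.register("notify", lambda msg: print(f"notify: {msg}"))
registry.run("notify", "traffic gap detected")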

In the next few iterations of the language, as we get some experience with customers, my goal is to add more complex event processing (CEP) features without doing anything that may couple the language to any particular customer problem, since the language and the interpreter are part of the core.

This can be a struggle. Pressure to take shortcuts or to build something specialized for that one special customer is difficult to ward off. But pushing back is worth it, because otherwise a heavy price comes due somewhere down the road, when your generalized solution to many different problems is contaminated by one-offs.

When shortcuts are taken, or something overly specialized makes its way into the core, future development on that core becomes more and more costly. This is a vicious cycle in software development that you always want to avoid getting trapped in, as the technical debt metaphor explains.

Thursday, February 24, 2011

Document Management

A while back, the small team that started this project for IBM began to document the software in user guides and install guides, capturing what they thought was essential in just a handful of Word documents. These documents still exist, and are neatly split into small chapters.

The software grew. Over time, these individual documents have expanded, and new features have caused the total document count to increase. To cover the basic software features, the end user documentation is now dense subject matter spanning a sizeable document library that contains hundreds of pages of text and graphics. The software is, after all, a full blown video analytics suite, and documenting its features in detail is no small endeavour.

With each software release, the amount of content grows in proportion to the new features. In fact, the last time I did a page count across the entire library, it came to about half the number of pages in War and Peace.

Even a well planned process to manage that many pages of software documentation in Word will not scale for the team that writes it or for the customers who consume it.

We have reached a point where we need to change the way we manage our end user documentation, and I've been asked to look into Daisy for this purpose.

At the moment, I'm cooking up a plan to consolidate all end user documentation into a Daisy repository and to host a Daisy wiki as a front end for the subject matter experts on our team. We want to cross-reference and index everything, and be able to manage translations. We also want to support several publishing formats, including XHTML and PDF, and to generate document aggregates. Daisy seems to fit nicely into these objectives.

Tuesday, February 08, 2011

Recognizing Variables


Nearly every programming language that supports variables uses them to name a thing in one place and refer to it someplace else. The notion of variables in computing comes from this need to refer to things repeatedly. The Content Type Rule Language (Ctrl) is no different from other programming languages in this respect.

The Ctrl focuses on a problem domain called event processing, in contrast to general purpose programming languages, whose scope is broader. Variables increase the Ctrl's power within that domain: a handful of variable types and some basic ways to act upon them play an important role in solving complex event processing problems.

Even the smallest language constructs imply instructions for an interpreter to carry out. When the Ctrl parser recognizes a variable declaration, for example, it builds small trees to encode these instructions. In essence, the parser's job is to break the language down into condensed trees that encode the instructions to be carried out by an interpreter or a translator.

An integer variable declaration in the Ctrl should look familiar:

int myInt = 10;

A declaration like this one contains three implied instructions:
  1. name something ( myInt ) 
  2. define a type for the thing named ( int )
  3. assign a value to it ( 10 )

Ideally, a compact tree representation of a variable declaration encodes instructions in a way that tells a language interpreter (or translator) exactly what to do. For variable types supported by the Ctrl, the sub-trees built by the parser have the following structures:

String Variable

var myString = 'str';



The VARDEF root node in this tiny sub-tree instructs the interpreter to define a variable using the two immediate child nodes. The left child bears the variable's name, myString, and the right child instructs the interpreter to assign it a LITERAL string value of 'str'.
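For illustration only (this is not the Ctrl parser's actual implementation; the class names and the toy symbol table are assumptions), a tree like the one described can be modeled with a couple of small node classes:

# Minimal sketch of the kind of tree described above; node names follow
# the post (VARDEF, LITERAL) but the classes are illustrative only.
from dataclasses import dataclass

@dataclass
class Literal:
    value: object          # e.g. the string 'str' or the integer 10

@dataclass
class VarDef:
    name: str              # left child: the variable's name
    value: Literal         # right child: the value to assign

# var myString = 'str';  ->  a VARDEF root with two children
tree = VarDef(name="myString", value=Literal(value="str"))

# A toy interpreter step: define the variable in a symbol table.
symbols = {}
symbols[tree.name] = tree.value.value
print(symbols)  # {'myString': 'str'}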

Integer Variable

int myInt = 10;



A single root node with two children is sufficient to tell the interpreter to create an integer and assign a specific value to it.

Time Roll Variable

var myDate = this.time - 20s;



We looked briefly at this type of expression in a previous post. Here, the result of evaluating a time roll expression is assigned to a variable, which holds a date/time value calculated from the date/time of an incoming event.

Event Array Variable

var myEvents = lookup.events.type("A");



Since lookups resolve to an array of events, declaring a variable from the result of a lookup expression is a good fit for the Ctrl's problem space: it gives Ctrl scripts a reusable handle for lookup results that they can examine, manipulate, and react to. The tree representation of such a declaration has a root VARDEF instruction, a left child node that bears the variable's name, and a right sub-tree that succinctly encodes the lookup criteria.

Event Array Variable (time constrained)

var myEvents = (this.time - 20s <= lookup.events.type("A"));



The tree above combines the two previous trees into a single, larger one. It instructs the interpreter to perform two related sub-tasks: first, calculate a date/time value from the TIMEROLL sub-tree, and second, look up events using the result of that calculation. The intent of the tree is to encode these instructions, combine them, and assign the lookup result, an array of events, to a variable called myEvents.
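Continuing the toy model from earlier (again, not the actual Ctrl implementation; the node classes and the evaluate() logic are assumptions for the sketch), the combined tree can be modeled by nesting a time roll node under a lookup node:

# Illustrative sketch of the combined tree: a VARDEF whose right child is a
# LOOKUP constrained by a TIMEROLL. Node names follow the post.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TimeRoll:
    seconds: int                      # e.g. this.time - 20s

@dataclass
class Lookup:
    event_type: str                   # e.g. "A"
    not_before: TimeRoll              # time constraint on the lookup

@dataclass
class VarDef:
    name: str
    value: Lookup

tree = VarDef(name="myEvents",
              value=Lookup(event_type="A", not_before=TimeRoll(seconds=20)))

def evaluate(node: VarDef, incoming_time: datetime, events: list) -> list:
    """Resolve the lookup: events of the right type, no older than the window."""
    cutoff = incoming_time - timedelta(seconds=node.value.not_before.seconds)
    return [e for e in events
            if e["type"] == node.value.event_type and e["time"] >= cutoff]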

Tuesday, February 01, 2011

Language-oriented programming

From Wikipedia:
Language oriented programming (LOP) is a style of computer programming in which, rather than solving problems in general-purpose programming languages, the programmer creates one or more domain-specific languages for the problem first, and solves the problem in those languages. 

LOP is the programming paradigm I took on last year for the Content Type Rule Language (Ctrl). The Ctrl has several LOP characteristics:

  • It is formally specified: it has a grammar defined in EBNF.
  • It is domain-oriented: focused on real-time event stream analytics.
  • It is high-level: it provides abstractions, compiles dynamically, and is interpreted.

The initial effort in developing a DSL is quite large; specifying a grammar and writing an interpreter for the language took a considerable amount of time and wasn't easy. But we now have a language in which to solve a wide range of event analytics problems.

The immediate payoff is a real change in the way I work. I now think of new requirements almost entirely in terms of the DSL I designed. If the language needs a new construct or feature to solve a problem in a general way, I can extend it as needed. Language enhancements are easy. The Ctrl is so compact that a large amount of work can be accomplished in just a few lines of code. This keeps the Ctrl code base small even as the complexity of the tasks accomplished in the language increases. LOP is powerful stuff.

Friday, January 28, 2011

Traffic Flow Optimization

New research conducted by scientists at the Santa Fe Institute suggests that vehicle traffic flow through city street intersections improves simply by coordinating neighboring traffic lights. 

In a working paper published late last year, the scientists found evidence of large-scale, city-wide traffic flow improvements given dynamic adjustments at the local level, between neighboring lights. The method proposed in the paper shows an overall reduction in mean travel times throughout a city. The research indicates that when traffic lights are allowed to self-organize, an overall system smoothness emerges.

One of the sample scenarios we use in our documentation for the rules engine is based on this research, which I think is fascinating. Our scenario combines multiple devices, cameras and traffic lights, and shows how they can be integrated using rules-based logic. I wanted to see if we could implement the method described in the paper using our little rules engine and the Ctrl (Content Type Rule Language). As it turns out, we can. 

Background

Traffic lights at city street intersections are usually controlled by timers that are scheduled to change light phases based on typical traffic conditions for the time of day. In many cases, traffic lights are timed to change more frequently during rush hours to minimize congestion and to allow traffic to flow in all directions at short intervals. 

Timed traffic lights provide optimal flow as long as the traffic conditions the timers are based on occur as expected. When this happens, drivers don't need to wait very long at a red light and, if they're lucky, may ride out a wave of green lights. 

The Santa Fe Institute research shows that there are situations in which the simple clockwork mechanism that controls traffic lights does not always provide optimal flow. When there is an accident, for example, or when less traffic than expected comes through, the timers are unable to adjust to changing conditions. The variability in the number of cars and in individual driver speeds during the day makes it nearly impossible to optimize traffic flow with fixed timers. 

The Traffic Gap Scenario

The example we use in our documentation is focused on the detection of a traffic gap. We show that rules-based logic is triggered by events generated from video analytics that detect traffic gaps. The emergence of a traffic gap behind a platoon of cars is a perfect opportunity to fine-tune the lights. The basic principle underlying this kind of traffic optimization is simple: if there's no traffic and a light is green, then change it and allow traffic to flow in another direction. 

The goal is to decide when to switch the lights and to self-adjust based on current traffic conditions. These conditions can be set up and detected by if-then logic defined in a rule. 

The Details

The scenario involves two street intersections:



A. First Street and Main Street

B. Second Street and Main Street

Traffic moves northbound on Main through First Street and Second Street. Traffic on First St. and Second St. moves east to west. This example requires one camera (Camera A) mounted on the traffic light that regulates traffic moving north on Main at First Street, and a second camera (Camera B) mounted on the traffic light that regulates traffic moving north on Main at Second Street. The video analytics for Cameras A and B are set up to send an event to our rules engine when Directional Motion is detected as a car drives north through intersections A and B.



Since traffic gap detection is based on the absence of a Directional Motion event for a period of time through intersection A, the rule in this scenario says: "IF there is Directional Motion north through intersection B, AND there has not been Directional Motion north through intersection A for 20 seconds, THEN switch the traffic light and allow traffic to flow in another direction."

In the Ctrl (Content Type Rule Language) the conditions and actions for this rule are:
if ( (this.type == 'Motion North') && (this.camera == 'B') ) {
  var events = 
    (this.time - 20s <= 
      lookup.events.type('Motion North').camera('A').time) 
  if (events.size <= 1) {
    run('TrafficSignalControl.sh', 'ChangePhase:Red', 'A')
    run('TrafficSignalControl.sh', 'ChangePhase:Red', 'B')
  }
}

The condition in the statement above calculates a timestamp value from the incoming Directional Motion event through intersection B by subtracting 20 seconds from it:

this.time - 20s

The calculated timestamp value is used to look up Directional Motion events whose timestamp is greater than or equal to the calculated value, that is, events that occurred within the 20 seconds leading up to the incoming event. 

In this case, when the conditions in the rule are met, the run action executes twice and invokes a hypothetical TrafficSignalControl.sh shell script, which is a wrapper for a program that interfaces with the traffic lights at intersections A and B. The shell script takes a ChangePhase:Red command as an argument, as well as the name of the intersection whose light to change. 



Such a rule action is for illustration only, and assumes that traffic lights in a city expose some kind of API that can be used to send a light phase change request from the rules engine when a rule is triggered. In the sample rule above, the TrafficSignalControl.sh script would be invoked once to suggest a light phase change at First St. and Main North, and a second time to suggest a light phase change at Second St. and Main North.
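Purely as a hypothetical sketch of what such a wrapper might do (no real traffic-light API is implied; the endpoint, payload, and script name are made up), the phase change request could be forwarded over HTTP like this:

# Hypothetical sketch only: there is no real traffic-light API here.
# A wrapper like TrafficSignalControl.sh could forward a phase-change
# suggestion to a city traffic controller over HTTP, for example.
import json
import sys
import urllib.request

def change_phase(intersection: str, phase: str, base_url: str) -> None:
    """Send a phase-change suggestion for one intersection."""
    payload = json.dumps({"intersection": intersection, "phase": phase}).encode()
    req = urllib.request.Request(
        f"{base_url}/intersections/{intersection}/phase",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(f"{intersection}: {resp.status}")

if __name__ == "__main__":
    # e.g. python traffic_signal_control.py ChangePhase:Red A
    command, intersection = sys.argv[1], sys.argv[2]
    phase = command.split(":", 1)[1]          # "ChangePhase:Red" -> "Red"
    change_phase(intersection, phase, base_url="http://localhost:8000")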

This example, while simple, is intended to be extensible. It shows that the rules engine, used as an extension to a traffic control system, can make smart decisions automatically: the rule logic detects a traffic anomaly and identifies exceptional conditions in a way that can trigger traffic flow optimizations in a busy environment. The traffic gap phase change scheme works for any number of intersections, scaling up to a larger, smarter city level.