Streams, Firehoses, and Buckets: Understanding AWS Data Analytics

AWS Data Analytics can mean different things to different organizations and can encompass a wide range of applications. In general, data analytics involves the use of statistical methods to describe data and extract trends.

Data Collection

The collection phase of AWS data analytics gathers raw data from a source and stores it in a database or other storage resource. Data collection resources follow a publish/subscribe model: producers publish data to the resource, and consumers subscribe to collect it. In AWS, two common data collection services are Kinesis and Simple Queue Service (SQS).


AWS Kinesis can be broken into Data Streams and Data Firehose:

Kinesis Data Streams are a real-time solution. They can gather data from CloudWatch Logs, Kinesis Analytics, the Kinesis SDK, Kinesis Agents installed on EC2 instances or on-premises machines, or other third-party libraries. The data is broken down into immutable data blobs, each attached to a partition key and sequence number. The partition key determines which shard of the Data Stream the data blob will be sent through.

Data Streams are made up of one or more shards. Each shard has a limited throughput of 1 MB/s for producers and 2 MB/s for consumers, and the number of provisioned shards can be changed to meet throughput requirements: a Data Stream's shards can be increased or decreased via an API call to split or merge them.
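To make the partition-key-to-shard mapping concrete, here is a minimal Python sketch of the routing idea: Kinesis takes the MD5 hash of the partition key as a 128-bit integer and sends the record to the shard whose hash-key range contains it. The even split of the hash space below is an assumption for illustration; after splits and merges, real shard ranges can be uneven.

```python
import hashlib

def shard_for_key(partition_key: str, shard_count: int) -> int:
    """Illustrative sketch of Kinesis record routing: the MD5 hash of the
    partition key is treated as a 128-bit integer and mapped into one
    shard's hash-key range (here, ranges that split the space evenly)."""
    hash_value = int(hashlib.md5(partition_key.encode("utf-8")).hexdigest(), 16)
    range_size = 2 ** 128 // shard_count
    return min(hash_value // range_size, shard_count - 1)

# Records with the same partition key always land on the same shard,
# which is what preserves per-key ordering.
assert shard_for_key("sensor-42", 4) == shard_for_key("sensor-42", 4)
```

This is also why a hot partition key can overload a single shard while the others sit idle.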

Kinesis Data Firehose, on the other hand, is not quite real time. Firehose uses a buffer with a size limit and a time limit, ranging from 1-128 MB and 60-900 seconds respectively. Data is written to the buffer, and when either the time limit or the size limit is met, the buffered data is transmitted all at once and can be picked up by consumers.
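The buffering behavior can be sketched in a few lines of Python. This is a toy in-memory model, not the Firehose implementation: records accumulate until either limit trips, then the whole batch is handed to a sink at once.

```python
import time

class BufferedDelivery:
    """Toy model of Firehose-style buffering: records accumulate until
    either the size limit or the time limit is reached, then the whole
    buffer is flushed at once. (Illustrative only; real Firehose buffering
    hints range from 1-128 MB and 60-900 seconds.)"""

    def __init__(self, size_limit_bytes, time_limit_s, sink):
        self.size_limit = size_limit_bytes
        self.time_limit = time_limit_s
        self.sink = sink            # callable that receives the flushed batch
        self.buffer = []
        self.buffered_bytes = 0
        self.first_write = None

    def put(self, record: bytes):
        if self.first_write is None:
            self.first_write = time.monotonic()
        self.buffer.append(record)
        self.buffered_bytes += len(record)
        if (self.buffered_bytes >= self.size_limit
                or time.monotonic() - self.first_write >= self.time_limit):
            self.flush()

    def flush(self):
        if self.buffer:
            self.sink(self.buffer)
        self.buffer, self.buffered_bytes, self.first_write = [], 0, None

batches = []
fh = BufferedDelivery(size_limit_bytes=10, time_limit_s=60, sink=batches.append)
for chunk in (b"aaaa", b"bbbb", b"cccc"):   # 12 bytes total trips the size limit
    fh.put(chunk)
assert batches == [[b"aaaa", b"bbbb", b"cccc"]]
```

The trade-off is visible in the model: a larger buffer means fewer, bigger deliveries, while a smaller buffer or shorter time limit gets data to consumers sooner.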

Kinesis Firehose accepts similar producers as Data Streams but supports only Redshift, S3, Elasticsearch, and Splunk as consumers. Unlike Streams, however, the data in Firehose is mutable and integrates with Lambda: Lambda functions can modify the data in transit or use an SDK to integrate with other AWS data analytics resources.


While Kinesis streams data on a continuous basis, another method of data collection worth considering is Simple Queue Service (SQS). SQS is a more traditional message queue: producers send messages to the queue, and subscribed consumers poll the queue, process the messages, and remove them.

A key difference: an SQS message is deleted from the queue once a consumer processes it, so each message is handled by a single consumer, whereas multiple consumers can subscribe to and read the same Kinesis stream.

Another limitation of SQS is that messages are limited to 256 KB of text. A common workaround is to place the data in an S3 bucket and send only the metadata as the message, so the consumer can fetch the full data from the S3 bucket.
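A sketch of that S3-pointer workaround is below. With real AWS this would use boto3's `s3.put_object` and `sqs.send_message`; the in-memory dict and list stand in for the bucket and queue so the example stays runnable.

```python
import json, uuid

MAX_SQS_BYTES = 256 * 1024  # SQS message size limit

def send_large_payload(payload: bytes, s3_bucket: dict, queue: list):
    """If the payload exceeds the SQS limit, store it in 'S3' (a dict here)
    and enqueue only a small pointer message; otherwise send it inline."""
    if len(payload) <= MAX_SQS_BYTES:
        queue.append(json.dumps({"inline": payload.decode("utf-8")}))
    else:
        key = f"payloads/{uuid.uuid4()}"
        s3_bucket[key] = payload                     # "upload" to S3
        queue.append(json.dumps({"s3_key": key}))    # send only the pointer

def receive(message: str, s3_bucket: dict) -> bytes:
    body = json.loads(message)
    if "s3_key" in body:
        return s3_bucket[body["s3_key"]]             # fetch the real data
    return body["inline"].encode("utf-8")

bucket, queue = {}, []
big = b"x" * (300 * 1024)                            # larger than 256 KB
send_large_payload(big, bucket, queue)
assert receive(queue[0], bucket) == big
```

The pointer message stays tiny regardless of payload size; the consumer's only extra work is one S3 fetch.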


Data Processing

A common task for data analytics is Extract, Transform, Load (ETL). The collection phase extracts raw data from a source and sends it into a cloud architecture. The processing phase transforms the raw data into structured data that is more usable for analytics, then loads it into storage or sends it to another resource.

Lambda functions are serverless functions triggered by events. A Lambda function stores a script written in one of several supported languages; when its event fires, the script is executed.

For AWS data analytics, Lambda functions can be used for real-time processing and transformation. Events from Kinesis Firehose, SQS, or S3 can trigger a Lambda function to transform the incoming data with its script. Being serverless, Lambda functions are meant for short, stateless processes and time out after a maximum of 15 minutes.
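As a sketch of what such a transformation looks like, the handler below follows the shape of a Firehose data-transformation Lambda: each record's data arrives base64-encoded, and the function returns it transformed along with a result status. The field names match the Firehose transformation contract; the uppercasing transform is just for illustration.

```python
import base64

def handler(event, context):
    """Transform each Firehose record and return it in the shape
    Firehose expects: recordId, result, and re-encoded data."""
    output = []
    for record in event["records"]:
        raw = base64.b64decode(record["data"]).decode("utf-8")
        transformed = raw.upper()                      # the "T" in ETL
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}

# Handlers are plain functions, so they can be invoked locally with a
# Firehose-shaped test event:
event = {"records": [{"recordId": "1",
                      "data": base64.b64encode(b"hello").decode("utf-8")}]}
result = handler(event, None)
assert base64.b64decode(result["records"][0]["data"]) == b"HELLO"
```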

To perform complex or long-running transformations, it may be necessary to split the script into multiple smaller, well-defined steps. These steps can be organized with Step Functions, which manages a sequence of Lambda functions called a workflow: the output of each function is passed as the input to the next step.

By organizing Lambda functions into Step Functions workflows, error handling, ordering, and state are managed for you, allowing more complex processing jobs to be completed serverlessly.
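The orchestration idea can be modeled in a few lines. This is a toy sequential workflow, not the Step Functions service: each step is a plain function standing in for a Lambda, and the orchestrator, rather than the steps, carries the state from one to the next.

```python
def run_workflow(steps, initial_input):
    """Toy model of a sequential workflow: the output of each step
    becomes the input of the next."""
    state = initial_input
    for step in steps:
        state = step(state)
    return state

# Three small, stateless "Lambdas" in place of one long-running script:
extract   = lambda s: s["raw"].split(",")
transform = lambda rows: [int(r) for r in rows]
load      = lambda nums: {"total": sum(nums), "count": len(nums)}

result = run_workflow([extract, transform, load], {"raw": "1,2,3"})
assert result == {"total": 6, "count": 3}
```

Real Step Functions state machines add retries, branching, and timeouts on top of this basic pass-the-output-along pattern.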

AWS Glue provides multiple solutions for data analytics. A Glue crawler can discover schemas from unstructured data stored in S3 buckets: it examines the data, auto-discovers schemas, creates the corresponding tables in the Glue Data Catalog, and can register the schemas in the Schema Registry. Schemas stored in the Data Catalog allow EMR or Athena to query S3 buckets with SQL.

Glue Studio can be used to create ETL jobs with a low-code console or by uploading Python or Spark scripts to perform the processing. For Glue ETL jobs, a source is identified from S3, a relational database, Redshift, Kinesis, or Kafka. An extensive suite of predefined transformations (map, join, drop, filter, etc.) can then be applied to the source data, and the transformed data can be loaded into S3 or a Glue Catalog database.

AWS Data Analytics


Once the data has gone through processing and ETL, insights can be extracted from it. In the analytics phase, descriptions of the data and its trends are inferred: identifying maximums, minimums, variation, outliers, and cyclical or upward/downward trends all help build a better understanding of the data.

For real-time analytics, Kinesis Data Analytics integrates easily with the other Kinesis streaming services. As data passes through Kinesis Analytics, it can auto-detect the data's schema and allow analytics to be applied via SQL queries. The output of Kinesis Analytics can then be streamed back into a Data Stream or Firehose.

Athena uses the data catalog generated by Glue to let unstructured data be searched with SQL queries. Querying S3 directly removes the need to first load the data into a database, making Athena a powerful tool for ad-hoc analysis of unstructured data without managing servers.

Athena works best when the data sits in a common format in an S3 bucket. When varied data must be pulled from multiple sources, another service such as Redshift would be better suited.


QuickSight is an end-user-focused tool for building dashboards and visualizing data. QuickSight is a Business Intelligence (BI) tool that draws on many different sources and types of data to explore. As a service meant for end users, QuickSight provides an in-memory engine for creating graphics, allowing users to quickly explore their data.

This overview is not an exhaustive list of the services that can be used to build a data analytics application, but these services are ubiquitous and would appear in most data analytics applications. With so many options, it can be hard to decide which one to pick given the costs and benefits of each.

Instead of compromising on requirements, AWS recommends decoupling them and using the appropriate service for each. For example, if data needs to be streamed in real time to identify outliers or spikes and also needs to be processed for storage, instead of choosing between a Data Stream or a Firehose, use both.

A Kinesis Data Stream with Kinesis Analytics can provide the real-time streaming and analytics to identify outliers and spikes, while a Kinesis Firehose consumes the same data in parallel and streams it to an S3 bucket, where Glue can perform ETL to process it.



Performance Testing Is Not Just a Stopwatch (Stress vs Load Testing)

History of Stress and Load Testing

My first exposure to performance testing was some time ago, working on a government contract developing a Command-and-Control system. This involved both hardware and software, and testing included "preliminary" testing for message transmission.

Don’t laugh, but we actually used a stopwatch to time the transmission. The military was interested not in the computer times we could have extracted from logs, but in the time it took the messages to actually display on the screens.

Mainframe computers were developed in the 1950s and 1960s, but it was not until the 1960s that ways of capturing data internal to the computer were developed. In 1966, System Management Facilities (SMF) was released as part of OS/360 to collect such data. In the 1970s, further development led to the release of Resource Measurement Facility (RMF), part of MVS, which provided real-time monitoring. In the 1980s, many other performance tools were produced for mainframe computers, along with software testing tools.

Mercury Interactive was founded in 1989, and shortly thereafter released the first version of LoadRunner. Before that release there were still ways of measuring software performance, but data collection was manual and tedious: logs containing the data were exported and manipulated by hand to produce understandable results. Modern tools provide those results with little manipulation.

Performance Testing Terminology


Load Testing

Testing an application using a predefined amount of processing to determine how the application responds. This usually ensures that the application will function properly under normal conditions.


Stress Testing

Testing with an above-normal amount of processing or requests to find out how an application responds to the increased workload. This will at times "stress" the application or the hardware.


Endurance (Soak) Testing

A test executed over a long period of time to validate that the application can maintain performance over time.


Spike Testing

A test where a very high amount of processing occurs at one time. This can be many users logging onto an application at once, or other such bursts of activity.


Volume Testing

A test that uses a large amount of data, usually by increasing the database size or the number of transactions against the database.


Scalability Testing

Testing an application/system to determine how network, memory, or CPU changes affect performance. It is important to keep the scale of the increments manageable so you can accurately determine when the changes occur.

Are Stress and Load Testing the same?

These two terms are probably the most misused in performance testing; people interchange them quite often, perhaps because either is easier to say than "performance testing." But by the definitions given above, the two types of tests are not the same: their actions and goals differ.

stress vs load testing infographic


Performance Requirements

Performance testing needs good requirements for the results to be of any use.

Like all software requirements, they should be detailed and explicit. Another key property of performance requirements is that they should be quantifiable.

Here is an example of a requirement to display the home screen, something like:

"The home screen shall display quickly."

That requirement as written is totally subjective. It has no quantifiable amount, and it is not detailed.


Without well-written requirements, a performance test cannot meet the goals of measuring system attributes like responsiveness, speed, and stability.

A revised example of the requirement above might be:

"The home screen shall fully display within 3 seconds for 95% of requests under a load of 100 concurrent users."

This requirement is now detailed and quantifiable, with no room for subjective interpretation.
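Quantifiable requirements have the added benefit of being checkable by code. The Python sketch below verifies a latency requirement of the "within N seconds for P% of requests" form against measured response times; the nearest-rank percentile method and the sample numbers are illustrative, not from any real specification.

```python
import math

def meets_requirement(response_times_s, limit_s, pct):
    """Return True if pct% of responses completed within limit_s seconds,
    using the nearest-rank percentile method."""
    ordered = sorted(response_times_s)
    rank = math.ceil(pct / 100 * len(ordered))  # nearest-rank index (1-based)
    return ordered[rank - 1] <= limit_s

# 19 of 20 samples are fast: the 95th percentile passes a 3-second limit.
assert meets_requirement([1.2] * 19 + [9.5], limit_s=3.0, pct=95)
# With two slow samples out of 20, the 95th percentile fails.
assert not meets_requirement([1.2] * 18 + [9.5, 9.9], limit_s=3.0, pct=95)
```

A check like this is exactly what load-testing tools automate once the requirement gives them numbers to test against.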


Performance testing is more than timing an application or just a part of it. A full understanding of the types of performance testing should guide the effort, along with the systems being used for the testing, the requirements being verified, and the tools used during the testing.

So put that stopwatch away and get your shovel out and dig for more knowledge about performance testing.



Going Serverless with AWS

The first lesson of going serverless with AWS is that you’re not truly going serverless in the literal sense. You’re relinquishing the overhead and responsibility of provisioning, maintaining, and administering servers and handing that responsibility to AWS. It almost sounds too good to be true, right?

How Can I Move to a Serverless Architecture?

AWS offers a robust collection of products that can be used to replace the functions that servers are currently carrying out in your applications. A handful of these services will be detailed in this blog but there are plenty more to explore. ParkMyCloud provides a convenient list of services categorized by use case.

So what exactly is AWS Serverless? Using AWS Serverless means to build a serverless application using AWS’s fully-fledged army of services. This concept can seem a bit abstract at first. To provide a tangible example, let’s say that you’re thinking about building a serverless web application. The architecture below provides an example of how this can be accomplished using AWS services. AWS provides a good tutorial here that serves as a guide to creating your first AWS Serverless web application with this architecture.



AWS Serverless Services

The components shown in the example above are described in more detail below. Keep in mind that this architecture is only one example of how AWS services can be combined to build a serverless application.

Simple Storage Service (S3)

S3 provides a place to store data hosted by AWS. To get started, create a bucket, a structure that serves as a container for related resources, and choose a region for Amazon to store your data in. Then upload your data to the bucket. You can conveniently choose a region located near your end users to reduce latency, if applicable.

Amazon Cognito

Amazon Cognito handles the user registration process and uses the user pool paradigm to manage user data and workflow. A user pool is a user directory that contains all user information. Amazon Cognito handles all processes related to users, including the implementation of a registration flow, sending registration emails, provisioning JWTs and more. Add two-factor authentication or SAML with the click of a button. These processes are fully customizable in the AWS Cognito interface. 

Amazon API Gateway

Amazon API Gateway allows you to quickly create and publish APIs. API management and maintenance become simple since AWS takes care of most of the dirty work. APIs created with API Gateway can effortlessly process hundreds of thousands of concurrent API calls, providing excellent reliability and capacity.

AWS Lambda

AWS Lambda allows you to write code that lives on servers managed by AWS. Lambda functions can be automatically triggered by other AWS services or can be called from a web or mobile app. For example, you could create an API using Amazon API Gateway which then triggers a Lambda function to execute a post action on a database. The use cases for AWS Lambda are endless and extend far beyond the scope of the web application architecture outlined above. Check out the YouTube playlist found here to see how big companies are using AWS Lambda.
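The handler below sketches the API Gateway case: with a proxy integration, the event carries the HTTP request and the return value must include a status code and a string body. The `name` query parameter is just an illustrative input.

```python
import json

def handler(event, context):
    """Lambda handler for an API Gateway proxy integration: read the
    request from the event, return statusCode plus a string body."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Handlers are plain functions, so they can be exercised locally:
resp = handler({"queryStringParameters": {"name": "AWS"}}, None)
assert resp["statusCode"] == 200
assert json.loads(resp["body"]) == {"message": "hello, AWS"}
```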

Simplifying Development with AWS SAM

While AWS provides great interfaces for the implementation of each of their services, it’s much more convenient to be able to set services up and running from the command line. AWS SAM makes this possible. Define your serverless application using a template that specifies all functions, APIs, permissions, configurations, and events. Then use the AWS SAM CLI to package and deploy your application. The CLI also allows you to invoke and debug Lambda functions locally rather than having to use the AWS web interface for manipulation.
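As a sketch of what such a template looks like, the minimal SAM template below defines a single Lambda function fronted by an API Gateway route. The resource name, handler path, and route are illustrative placeholders.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  HelloFunction:                      # illustrative resource name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler            # module.function in your code bundle
      Runtime: python3.12
      CodeUri: src/
      Events:
        HelloApi:                     # creates an API Gateway route
          Type: Api
          Properties:
            Path: /hello
            Method: get
```

From there, `sam build` and `sam deploy --guided` package and deploy the stack, and `sam local invoke HelloFunction` runs the function locally.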


Is Going Serverless Worth It?

Quick! Let’s start transitioning all of our web applications to be serverless. Not so fast. Developing serverless applications requires shifts in development flow and time invested in gaining a deeper understanding of AWS services. It may be overwhelming to take on the development of an entire serverless application at once. One of the positives of using AWS services is that they can be implemented independently: transitioning just one API or implementing Amazon Cognito could be a great first step toward exploring what AWS Serverless has to offer.

Almost all AWS services use an elastic cost model, meaning that you only pay for what you use, similar to how you might pay for your water or electric utilities at home. Services scale in size to account for varying resource utilization over time. This model can prove very beneficial for applications or jobs that only use resources a few times a month. Rather than paying a monthly fee for a server, you pay for what you use, when you use it. Read more about the AWS service pricing model here.

The Bottom Line

The opportunities that AWS Serverless provides are endless and may alleviate a lot of the overhead that can result from provisioning, managing and maintaining servers. Since every application is different, it comes down to whether the elastic pricing and time saved by letting go of server management can be beneficial in your situation. 

Embracing Friction in UX

What is Friction in Design?

Originating from classical mechanics, friction is defined as a force that resists the relative motion of two touching objects sliding against each other. The concept of friction can be lent to the realm of usability design where the user and the interface are two entities moving against one another, creating an abrasive opposition as the user travels through their intended task flow. The more friction present, the more effort the user must input into the system in order to complete the end goal.

Friction Chart


Friction in UX is generally defined as anything that prevents or slows down the user from completing their intended task.

Friction takes many forms. For a user, it can be closing a newsletter signup or an advertisement before being able to read a news article, unclear language on a homepage, or having to create an account in order to apply for a job.

User interface friction can manifest in a number of ways but is widely regarded as substandard practice in the UX design school of thought. Steve Krug’s book ‘Don’t Make Me Think’ (an integral piece of literature for every UX designer) covers principles of good usability design in human-computer interaction. Krug’s sentiment, shared by many other usability design pioneers, is that good UX should require the absolute minimum amount of cognitive effort and steps necessary to complete a task.

From a designer’s perspective, minimizing friction means establishing a clear information hierarchy, applying common UX standards, reducing visual load, and constructing a stable intrinsic system logic to help users establish usage habits.


In most cases, the principle of avoiding friction should be applied to create seamless digital user experiences. Sometimes, though, especially when designing highly critical systems, a frictionless experience may not serve the best interest of the user.

A lack of friction seems appealing at first, but when it begins to compromise other usability constraints such as user security and safety, it’s advantageous to realize that friction can have an intended function in usability design.

In this article, I will demonstrate a handful of scenarios where friction can be employed in usability design for the ultimate benefit of the user, the product itself, and those it affects.



Using Protective Friction for Error Prevention

Here is an Amazon Dash Button. Simply press the button and an order is made instantaneously on your Amazon account for the respective product.

Amazon Dash Button

This purely frictionless experience is borderline magical, right? There you are, tending to your weekly load of laundry, and you realize you’re out of Tide Pods; you hit the button and instantly the words “Tide Pods” will never see the face of your grocery shopping list again!

What if every interaction was this simple?

Maybe in the interest of laundry affairs, a lack of friction is appropriate. But what if permanently wiping your computer’s hard drive or sending out a false ballistic missile threat alert to 1.2 million people in Hawaii was just as easy as pressing a button to order Tide Pods?

It is clear that some mistakes are not as easily forgiven.

Emergency Alert Notification


Now you’re catching my drift. Not every interaction is, or should be, created equal…

If more tiers of friction were implemented in the emergency alert interface, the admin user wouldn’t have mistakenly alerted 1.2 million people that a ballistic missile was currently heading their way and widespread existential panic could have been avoided.

It is important to recognize digital interfaces as tools that can impact our physical world and understand that we should take appropriate precautions to maintain safety when using them.


Enhancing Security

As the internet continues to become more globally adopted, it also grows increasingly more dangerous. Behind every corner of the internet, someone or something is lurking, waiting to access your personal data.

2 Step Verification


These days, creating an account is more complex than ever: you have to provide an email, a phone number, and answers to personal security questions just to complete onboarding.

Although tedious, multi-faceted security layers are not there just to annoy you as a user. In the long run, the upfront friction reduces the chance of a security breach down the road.

When you leave your car parked on the street, you lock the doors to prevent theft. Why wouldn’t you respect your digital possessions the same way and lock your figurative online doors as well? Yes, it takes a few extra steps, but it’s integral to protecting your computerized belongings.

Deceptive Friction

Make Users Feel Good

Have you heard the term “labor leads to love”?

In 2011, business researchers from Yale, Harvard, and Duke conducted a study in which individuals reported feeling more in control, competent, and accomplished, and placed a disproportionately high value on products they partially created. They named this cognitive bias the IKEA effect. For the IKEA effect to take place, the reward must outweigh the effort required.

IKEA Effect

This same philosophy can be applied to digital interfaces. For example, Instagram offers a platform for expression, but no content itself–it’s up to you to do that part on your own. The more creative effort the user puts into constructing their digital persona, the more valued it becomes by that user as a result.

If Instagram provided the content itself, there would be no allure in the diversity of expression generated by the user-base. There’s no fool-proof formula for establishing a noteworthy Instagram page, and that’s all of the enchantment!

Boost Credibility

As machines become ever more powerful, computations that used to take days are now performed in seconds. But the rate at which our gadgets get speedier is not matched by the level of faith humans place in them. This becomes evident in a case study where a product became so fast that users didn’t believe something important had really happened instantly. And thus, artificial waiting was born.

Wells Fargo Eye Scanning


In 2016, Wells Fargo introduced biometric authentication via eye scanning–users could now sign into their Wells Fargo app with a nimble eye scan. This innovative feature greatly streamlined the login process since users no longer had to manually input a username and password to be authenticated.

The assumption that a streamlined login process would naturally become adopted ignored a key insight–users must trust the action in order to embrace it.

Instead of receiving feedback that mobile app users were delighted by the new, frictionless process, Wells Fargo instead discovered that customers felt the validation was far too hurried to possibly be accurate. Given the sensitivity of the app context, users expressed apprehension and an unwillingness to use an “unreliable” login gateway to access their finances.

Despite this feedback, there was no fundamental reason to actually slow the eye-scanning computation. Instead, the solution was to introduce artificial waiting (also named the labor illusion by Harvard researchers) to the interface to build trust and communicate to the user that the system is computing properly. This way, users can use the simplified login process without feeling like it would allow any user with eyes to access their account.


This case study illustrates that facilitating user adoption entails more than just the existence of cutting-edge technology. It is imperative that the interface communicates to the user generally what is taking place behind the scenes in a way that’s straightforward for the user to interpret.

User-Improving Friction

Alter Behavior

When strategically architected, friction can guide users in an intended direction. Nudging is a theory from behavioral science that uses indirect suggestions and positive reinforcement to influence an individual’s decision making.

For example, an office building that reduces how frequently the main elevator returns to the lobby from every 45 seconds to every 90 seconds will ‘nudge’ a fraction of the elevator-users to take the stairs instead.

Nudging essentially adds small tricks to change behavior (for the better) without limiting the options available. We can observe the same concept in user interfaces.

Slack is a professional instant-messaging platform that facilitates communication for remote teams.

Slack Notifications

Given that it is used in a professional setting, it is likely that a percentage of its users allow push notifications due to high priority. Understanding this, the system alerts a user attempting to send a message to group members in other time zones, double-checking that they want to confirm the action. This added layer of friction doesn’t eliminate the option to send the message, but it causes the user to deliberate whether this is in fact what they intend to do. Potentially disturbing a team member in another time zone might not be at the forefront of the sender’s mind, so an extra step to communicate this consequence instills responsibility when using the product.


Build User Skills

Mario Kart Friction


The epitome of building user skills through friction is video games. The entire premise of a video game is to gradually increase friction as the player improves in skill. If the game has too much friction, the player can’t advance and is discouraged; if it doesn’t have enough, it is unrewarding and uninteresting.

Another setting where we can observe this is educational platforms. When learning a subject, the student begins with easier assignments and advances to harder ones as they progress through the course. Circling back to the earlier section on making users feel good about themselves, friction in this context adds a sense of achievement.

Increase Product Value

Snapchat UX


A high level of friction can also induce exclusivity in some user interfaces. Snapchat, for example, doesn’t follow traditional UX guidelines and patterns. Many have ridiculed Snapchat for being unintuitive and not exercising better navigational standards. In response, Snapchat’s CEO, Evan Spiegel, has said:

“It’s simply anti-adult… This is by design. We’ve made it very hard for parents to embarrass their children.”

Disassociating itself from social media platforms like Facebook that have been widely adopted across all age groups, Snapchat’s manifesto is to maintain the age-exclusivity of its product by, you guessed it… friction!

Juxtaposing Snapchat, Product Hunt applies friction to maintain the caliber of its platform. Product Hunt operates on the basis that its users make superior tech product recommendations. If the community allowed anybody and everybody to become a contributor, the pool of suggested products would be diluted and overcrowded. To be a contributor, you can’t just sign up with an email; you have to jump through a few hoops first to prove yourself worthy.

Although it can be hard to admit, sometimes not inviting everybody to the party can make the party better for those that are already there. Friction as a filter for those who are the intended users of the product preserves a level of integrity that isn’t mutually offered when it’s all-inclusive.

Moving Forward in Design

Sometimes it is difficult to grasp the fact that digital systems can have tangible impacts on our physical lives. Designers have a large responsibility to uphold standards of security, safety, and ease of use throughout usability design, while remembering that none of these should be sacrificed for another. Generally, unwanted friction should be eliminated, but it is important to remember that not all types of friction are insidious. Whether it is about gently nudging users, maintaining exclusivity, slowing down risky actions, teaching responsibility, or boosting credibility, don’t be reluctant to leverage some friction if it will improve the user experience and the context demands it.

Choosing Between Native and Hybrid Apps


Native and hybrid apps are two different ways of building your mobile application. Woodridge may use either depending on the type of your application, your tolerance for depending on a framework, and how much native functionality you need.

Native apps use frameworks and tools that are local to the platform. Going with a native-first approach means writing two separate apps, one for Android and another for iOS. On Android, you would write the app in Java or Kotlin, with all of the layout done in the Layout Editor in Android Studio. On iOS, everything is done in Xcode, with the code written in Objective-C or Swift and the layout done in Xcode Storyboards.

Hybrid apps use a variety of techniques that allow you to write your application with a single codebase. That code is then compiled for each native platform; the end result of a hybrid compilation process is a project that can be opened in Android Studio or Xcode. The downside is that you cannot make changes in these projects without them being overwritten the next time you compile.

In general, the more native features you plan on using (camera, GPS, Bluetooth), the more it makes sense to use a native app, since these features are platform specific. The sensor APIs are very different between iOS and Android, which means code reuse would not be possible.

Hybrid vs native apps


How Hybrid Apps Work

Hybrid apps work by embedding a web view (typically a WebView on Android or a WKWebView on iOS) and then sending information back and forth across a JavaScript bridge. Any information you want to exchange between the web view and the native side goes through message passing. This adds a layer of complexity and a mismatch between the objects inside and outside the web view, which can only communicate indirectly.
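The sketch below is a toy model of that bridge, written in Python for readability rather than in any real bridge API: the two sides can only exchange strings, so every object crossing the bridge is serialized and re-parsed, and the sides never share a live object. The class and handler names are illustrative.

```python
import json

class JsBridge:
    """Toy model of a hybrid app's JavaScript bridge: the web view and
    the native side exchange only strings, so each request is serialized,
    re-parsed on the other side, and the reply serialized back."""
    def __init__(self, native_handlers):
        self.native_handlers = native_handlers   # native-side capabilities

    def post_message(self, message: str) -> str:
        request = json.loads(message)            # native side re-parses
        result = self.native_handlers[request["action"]](request["payload"])
        return json.dumps({"result": result})    # and serializes the reply

bridge = JsBridge({"getBatteryLevel": lambda _: 87})
reply = bridge.post_message(json.dumps({"action": "getBatteryLevel",
                                        "payload": None}))
assert json.loads(reply) == {"result": 87}
```

The serialize/parse round trip on every call is the overhead, and the string-only boundary is the object mismatch, described above.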

Look and Feel

If you use a hybrid app, your app will look the same on all devices, but it will not match the feel of other apps on the platform. For example, a list view will not look like a default list on iOS, and you will not get native features like pull-to-refresh or sticky headers by default. The native components, which are performant and have already been proven from a usability perspective, cannot be used inside of a web view. This includes the native maps component, scrolling list views, and camera previews.


Certain types of information work better in a web view than in a native container. For example, anything document or text related displays better in a web view. The web has always been a medium for displaying documents and has only recently grown into a format for web applications with the evolution of JavaScript and HTML5.

Hybrid apps used to be slower, but the web views on both Android and iOS have improved, and techniques such as CSS-driven animations and transitions perform reasonably well on newer phones. On older devices, native transitions remain the best choice.

Cost-Benefit Analysis

When choosing a native vs a hybrid approach, you also have to consider the cost of maintaining two separate apps versus maintaining a single hybrid app. The more native functionality you need, the more overhead that will be needed to get the hybrid app to work properly.

The development environments of Xcode and Android Studio are great for rapid application development. Since the mobile world has converged on two operating systems (iOS and Android), there are only two platforms you need to support by going native, versus the 4-5 of the past. Going native therefore means far less to support than when there were more platforms.

Frameworks and Techniques for Hybrid Apps


Cordova is a popular choice for writing hybrid apps. Given some JavaScript and HTML, Cordova generates an app for each platform consisting of a single web view. You write the app in HTML and JavaScript, which gets loaded into that view. A JavaScript bridge and a plugin interface then let you reach native functionality. One downside is that the plugin ecosystem is uneven: many of the plugins you use rely on deprecated APIs and are no longer maintained. Any platform-specific functionality has to go through a plugin, including splash screens and native alerts.

React Native

React Native differs from Cordova in that it dynamically creates native components orchestrated by JavaScript. Only a certain number of cross-platform native components are implemented, but they give you a native look. One downside is that although you are using native components, you have very little control over their specifics. This is because you are writing code that gets compiled to native, not the native code itself. It then becomes more expensive and time-consuming to fix bugs that would have been quick fixes with direct access to the native code.

Embedded Web Views for Specific Screens

Instead of implementing the whole app natively, you can use web views to show specific screens. For example, you might want a screen where users can view their profile information. This page could be loaded from a server dynamically, meaning it could be updated without the user having to reinstall the app. One downside of having many separate web views is that the startup time is around one second even for a sparse page, so the user has to wait for the information to load.

One of the main pain points of hybrid apps is that it is difficult to "jump out" of the embedded web view or framework into a native view. For example, making a single screen native can involve a large restructuring of the whole app, since the hybrid framework assumes your app is structured a single way. Going native-first is the opposite: it is very easy to make a single page a web view and keep the rest of the app native.


In general, there is no clear answer to which approach to mobile app development is best; our recommendation varies from project to project. Hybrid apps can be a good choice if you want to get a simple MVP of your app to market as quickly as possible with minimal features. However, native apps offer the best performance, user experience, and security. We recommend native applications to all of our clients if their budget allows for it, but hybrid app development can be a cost-efficient way of getting a simple application to market quickly.

SQL Statements on Production

SQL Statements

The title of this blog should frighten most developers. Generally speaking, we want to avoid running a SQL statement – UPDATE, INSERT or DELETE – on production if it is possible. One statement can do a lot of damage if it’s not 100% correct.

But, there are times where due to time constraints or other considerations, it is not possible or practical to create a feature or developer-only tool to make a change. If this is the case, there are a few precautions that can be taken to minimize the potential impact of a statement and to quickly revert changes.

Measuring The Potential Effect

The first step is to check the WHERE clause of your statement. For example, if I have a “users” table with the unique key “id”, and a property “first_name”, I might try using the following to make a change:

UPDATE users SET first_name="Chad" WHERE id>22;

But – first, I can run the following:

SELECT * FROM users WHERE id>22;

From this, I would see that I am affecting way more users than the one I intended to update (unless everyone happens to share my name), and could modify my query to use “=” instead of “>”.


If you're using MySQL, you may be using InnoDB as the storage engine behind your tables (if not, you're probably running an older version). InnoDB supports transactions. From the MySQL documentation:

“Transactions are atomic units of work that can be committed or rolled back. When a transaction makes multiple changes to the database, either all the changes succeed when the transaction is committed, or all the changes are undone when the transaction is rolled back.”

I make use of transactions frequently. Laravel’s implementation of transactions helps when several inserts/updates/deletes need to succeed or fail as a group. We can also make use of them when we want to have an additional measure of safety when running a statement on production.

If I had failed to notice the ">" in the query above and run the statement anyway, that would have been a huge problem for production. Unless I wrapped the statement in a transaction first:

START TRANSACTION;
UPDATE users SET first_name="Chad" WHERE id>22;
#Oh no! I updated WAY more users than the 1 I wanted!
ROLLBACK;

The statements inside the transaction do not take permanent effect until a COMMIT is executed; a ROLLBACK undoes the changes. This provides a safer environment to make changes if you absolutely must run a statement on production. However, this is not a sandbox mode where you can sit there for minutes trying things out and then roll back when you are done. Rows modified inside an uncommitted transaction remain locked by default. So make sure you already have the statements you want to execute written somewhere, so you can quickly run them and evaluate whether only the changes you wanted happened.


Running new SQL statements on a production server is still not something we want to do. But in those cases where it cannot be avoided, taking the time to test your WHERE clause and to use a transaction can help you catch errors before incorrect data is written, changed, or deleted. Of course, you should still have automated backups in place, and (if possible) a service such as Amazon Aurora Backtrack enabled, to undo mistakes.

Chad Eatman is a software developer at Woodridge Software, a custom software development firm located near Denver, Colorado. Woodridge specializes in Web, Android, and iOS development and works with a variety of clients ranging from startups to Fortune 500 companies.

Increasing Efficiency as a Developer


Writing code is easy in today's world. I'm not even talking about writing quality code, following the Agile manifesto, or eking out every possible nanosecond of speed from your code. I'm talking about punching keys and having text appear on the screen.

There are so many tools out there today it's almost staggering. Numerous languages have their own IDEs (Integrated Development Environments) that programmers can leverage and customize to write code efficiently (see Xcode, Android Studio, PhpStorm, etc.). Not only that, these editors can also increase the quality of code by offering services like linting and type checking, and by providing helpful warnings to the developer. Even if IDEs aren't your thing, there are extremely powerful text editors that a lot of other people have spent a lot of time working on in order to make your life easier. My favorite is Sublime.

Leveraging Your Text Editor

Sublime is a text editor that can be extended with packages (see Package Control). There is seemingly a package for everything; if you're wondering whether a package exists for a tool or feature you need, chances are it does.

My point is not to show you how much of the Sublime Kool-Aid I drink. Rather, it's that a lot of dedicated people have already put time into making their own (and in turn, your) lives easier, so why not benefit from that?

So why do I mention Sublime and the other aforementioned editors? No matter your tool, you should be looking at how your editor can help you. Imagine writing all your fancy code in TextEdit / NotePad++ (and if you do use either of those, then you need this blog most of all). If you aren't leveraging your editor to its full capabilities, then there is no difference between your editor and those barebones text editing applications. However, I've ranted long enough. Now it's time for action. You might be wondering, "How can I use my editor to code so efficiently that I become the envy of the office?"

Using Hotkeys

Hotkeys! – In my opinion, hotkeys are some of the most underutilized features of modern computers/applications. How many times have you watched someone use a computer and click around the screen for ten seconds to accomplish something you could do in a keystroke? If the answer is never, you are probably the person clicking around. Think about this. I want you to:

  1. Open up a file in your Documents (called example.txt)
  2. Grab the last line of the file
  3. Google that line

How do you do it? (This will be geared for Mac users) You could:

  1. Click on your documents folder
  2. Click the “XXX more in Finder”
  3. Scroll till you find the file
  4. Double click
  5. Scroll to the bottom of the file
  6. Highlight the line with your cursor
  7. Copy (you could even right click then click copy for extra clicks!)
  8. Click on Chrome (or w/e browser) in your toolbar
  9. Click on the search bar
  10. Type “”
  11. Click on the google search bar
  12. Paste


BAM! Done in 12 easy steps! Or…

  1. Hit CMD + Space (Mac search feature)
  2. Type “example.txt” (or type until your file shows up as the top query)
  3. Enter
  4. [File is open] Hit CMD + Down Arrow (scroll to bottom – in most text editors)
  5. Shift + CMD + left arrow (highlight last line)
  6. CMD + C (copy) [optional CMD + W to close file]
  7. CMD + Space (or CMD + TAB if you have your browser open)
  8. Type “Chrome” (or w/e browser – not needed if using CMD + TAB) [optional CMD + T to open new tab]
  9. CMD + L (Go to search bar)
  10. CMD + V (Paste)
  11. Enter


Voilà! Saved one easy step! But seriously, the time difference between these two methods adds up over time. The second method has an additional benefit – no mouse required! Which segues nicely into my next point – you do not need a mouse.

That's a bold statement, so let me back it up. Yes, if you spend your days navigating HTML pages or using Xcode, you get to keep your mouse. Everyone else: throw them away! The mouse is a crutch that prevents you from learning new navigation techniques! Many folks see the mouse as "good enough," so they don't take the time to learn hotkeys. It is a rare day for me to find a mouse faster than a keyboard for text manipulation. Plus, you get a sense of pride when learning about new tools and using them in your day-to-day life. If you have never seen anything like this before, I highly recommend watching a tutorial or coding video where the developer makes liberal use of hotkeys. It's one of the reasons I started learning hotkeys myself. Remember: everyone starts somewhere.


Of course, there are other ways to increase efficiency. It would take a blog much longer than this to cover everything, but this is a start. I promise if you do these two things, leveraging your text editor and learning hotkeys, then your efficiency will increase. You may also notice your code quality increase – especially if you transition to an IDE. As a programmer, your learning should never end. Take the time to learn ways to improve yourself and be better at your profession.

Henry Dau is a software developer at Woodridge Software, a custom software development firm located near Denver, Colorado. Woodridge specializes in Web, Android, and iOS development and works with a variety of clients ranging from startups to Fortune 500 companies.

How To Build a More Secure Web App

Securing Your Web App

At Woodridge Software, security is something that we take very seriously. As technology continues to rapidly evolve, it seems like hackers’ methods of attacking modern software are evolving at an even faster pace. Unfortunately, the average software developer has little to no understanding of the miscellaneous attacks that are used by hackers, nor do they completely understand how to help prevent them. For my latest tech seminar, I decided to review our current methods of securing web apps, as well as areas where we can continue to improve.

The Most Common Attacks on Web Apps

The web has become the largest source of information and it houses the interfaces to an unimaginable amount of data. Due to its ease of access, it has become one of the leading mechanisms for modern cyber attacks.


Nearly every business has some sort of web application. Whether it is on a private intranet or available to the public, it is important that all of the software developers involved in making a web app are security-aware. Among the most common attacks are cross-site scripting (XSS), cross-site request forgeries (CSRF), and SQL injection (SQLi). To best understand how to prevent these attacks, it's important to understand how they occur. The best way to do that is to do some research and write up small examples that illustrate the attacks. They don't have to be cutting-edge examples – just enough to get the point across. From there, each example should guide you on how to prevent the attack. Most modern web frameworks have tools built in for mitigating such attacks, and understanding how the attacks occur will help you understand how these tools are implemented so you can use them properly.
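As a toy illustration of the XSS idea (in a real app, lean on your framework's built-in escaping rather than a hand-rolled helper like the hypothetical one below), output encoding turns markup in user input into inert text:

```javascript
// Illustrative only: escape the five characters HTML treats specially,
// so user input is rendered as text instead of being executed as markup.
function escapeHtml(input) {
  return String(input)
    .replace(/&/g, '&amp;')   // must run first, or it re-escapes the rest
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

const userComment = '<script>alert("xss")</script>';
const safe = escapeHtml(userComment);
console.log(safe); // &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The attack and the defense are both visible in a few lines, which is exactly the kind of small example worth writing for CSRF and SQLi as well.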

Cryptography Vulnerabilities

Every developer needs a basic understanding of cryptography.


Specifically, every developer should understand the differences between asymmetric and symmetric cryptography, and have a high-level understanding of cryptographically secure random number generators, cryptographic hashes, digital signatures, and message authentication codes. Any modern web application will most likely have some form of encryption (typically, communication over HTTPS, passwords stored via cryptographic hash, and sensitive information stored encrypted with some form of integrity check such as an HMAC). It's also considered a best practice to implement at-rest encryption on both your physical and cloud resources. Improper usage of cryptography (or no usage at all) is a sure way of putting your data at risk.

Sessions and Tokens

Whether you're using cookies for your web app session or a JSON Web Token (JWT) for a mobile API, keeping these secure is a top priority. All cookies that store a session identifier should have the Secure flag set (so they can only be sent over HTTPS) and the HttpOnly flag set so that client-side JavaScript code cannot access them. JWTs should always be backed by a blacklist using something like Redis, and they should have all the necessary claims set, such as nbf (not before) and exp (expiration). Ideally, JWTs should have a short lifetime, maybe even only one request.


This, of course, is not a comprehensive list and is merely an overview of a few important things to consider when building your next web app. Other things worth mentioning are rate-limiting requests (especially on login and registration), utilizing a Content Security Policy (CSP), ensuring your web application server is configured properly, security logs, etc. At Woodridge, we are no strangers to securing web apps. In fact, we can even help you take your existing applications and get them up to par to pass your next penetration test, which you should be performing at least once a year!

Lorenzo Gallegos is a senior software developer at Woodridge Software, a custom software development firm located near Denver, Colorado. Woodridge specializes in Web, Android, and iOS development and works with a variety of clients ranging from startups to Fortune 500 companies.

Custom Software Tools: When Should You Build Your Own?

"Should I invest time learning a new software tool? Or, should I just build my own?" These are questions I encounter regularly as a software developer, but they are also relevant to any individual or company trying to make their business more efficient. Software tools are programs that developers use to create, debug, maintain, or support other software applications. These tools are intended to make your workflow quicker and smoother, but sometimes they just eat up your valuable time. If your tool falls into the latter category, you may need to find a new solution or build your own.

Know Your Tool

First, you need to know your tools and your development options. If you’re having a problem with a common development process, there might be a solution that already exists. Software tools should help make your internal processes smoother. As you evaluate tools, make sure you stay up to date with all the functionality that’s available.

Obviously, documentation and tutorials can be helpful to make sure you understand the basic functionality of each option. What’s less obvious is understanding the intention behind various features. For example, you can use a hammer to slam a screw into a piece of wood, but with more effort than necessary (and with some unintended side-effects). The same concept applies to software tools that are being used in ways that they were never intended for, which can lead to issues down the road, if not immediately.

Identifying Pain Points

When evaluating the efficacy of your tool, think about the areas where it doesn't accomplish what you need it to, and the areas where it causes more pain than it relieves. Once you've identified these pain points, note where you have to work around or against the tool's features. If the tool's process doesn't match your workflow, make note of the differences. Documenting your pain points provides insight on whether to choose a new tool or to develop a custom one.

Woodridge uses a variety of frameworks and technologies including Vue.js and Laravel, which were chosen because they lacked the pain points of previous frameworks and technologies. For example, the PHP framework used on some of our older projects made interacting with the database a hassle – the defaults were non-intuitive which ultimately led us to work against the framework. Laravel’s Eloquent ORM made handling data much smoother and its object-oriented solutions made it easier to accomplish simple tasks.


Once you know your tool and have identified your pain points, you may decide that your current software tool doesn't fit your needs. If you find a tool that resolves your pain points (at least those you absolutely need resolved), then it's worth exploring a pre-built solution. However, if you look at a tool's features and foresee several custom modifications, then a pre-built solution may not be the right fit. In these cases, you should consider building your own software tool.

The right software tool can make your life easier, but the wrong tool can have the opposite effect. Choose wisely and constantly evaluate your options.


Chad Eatman is a software developer at Woodridge Software, a custom software development firm located near Denver, Colorado. Woodridge specializes in Web, Android, and iOS development and works with a variety of clients ranging from startups to Fortune 500 companies.

Advantages Of Functional Programming

Functional programming is a foreign concept to many software developers. Sure, they may have heard of functional programming before, maybe even seen a language or two, but imperative programming is still their bread and butter. In this day and age, a purely functional language is only useful for heating your computer (for most purposes), but knowing functional programming concepts can be immensely beneficial.


Functional Programming in Modern Languages

While a purely functional language may be impractical, functional programming concepts tend to worm their way into imperative (and object-oriented) languages. In fact, most modern programming languages have at least some functional aspects, even if it doesn't seem like it. Ever used Java? PHP? JavaScript? C++ (11 and later)? All of these have at least some form of functional programming built in, because functional programming can be so powerful.

Aspects of Functional Languages

You may have heard of some of these terms before: stateless programming, pure functions, memoization, recursion, and immutable data. All of these terms are features that are native to functional programming, but make their way into other languages. Some of these, like recursion, are things we take for granted. Since these are core concepts to functional programming, I’ll briefly describe each one and why they matter.

Stateless Programming

Stateless programming means that there is no external state governing the execution of the program: no state machine, no "if today is a weekday, then perform a backup." This leads to a fundamental feature of functional programming: pure functions.

Pure Functions

A pure function is a function with no side effects. If the same function runs with the same input, it will always give the same output. As you can imagine, this is quite dreamy for a programmer (especially anyone working with non-deterministic programs). Not only that, it is a critical concept for functional programming because it allows the compiler to work behind the scenes and optimize the program.
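The contrast is easy to see in JavaScript, which supports a functional style (the functions below are illustrative):

```javascript
// Pure: the output depends only on the input, with no side effects.
function factorial(n) {
  return n <= 1 ? 1 : n * factorial(n - 1);
}

console.log(factorial(3) === 6); // true, every single time

// Impure counterpart for contrast: the result depends on hidden state.
let counter = 0;
function impureNext() { return ++counter; }

console.log(impureNext() === impureNext()); // false: same call, different results
```

A compiler can freely reorder, cache, or parallelize calls to `factorial`; it can do none of that with `impureNext`.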

Having no side effects introduces the concept of referential transparency, which means that an expression can be replaced by its value and still produce the same result. For example, if x = 3, then calling factorial x is equivalent to calling factorial 3. You may think this is obvious, or wonder why I bother to mention it, but there are several reasons.

The first goes hand in hand with the no-side-effects argument. In other languages, passing a variable as an argument can change its value (passing by reference). In a pure language, there are no side effects, so passing a variable as an argument will never cause changes (desired or not) to that variable. (Another reason the variable won't change is that the data is immutable, but that will be explained later.)


The second reason you should care is memoization. "Are you saying that my program is going to send memos for me?" No, but it'll do something better. Memoization is function caching, and it can drastically improve performance. If the program makes a lot of repetitive calls to a function with the same inputs (calculating a factorial, for example), then a memoized function will remember the output from the last time it calculated the result and use that instantly.

Instead of calculating factorial 3 for the 20th time, it will just grab the result from the last time factorial 3 was called, with no concern that something will go wrong. As you can imagine, memoization can lead to an exponential improvement in recursive functions.
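As an illustrative sketch in JavaScript (not specific to any functional language), a memoizer caches results keyed by input; routing the recursive call back through the cache means sub-results are remembered too:

```javascript
// Memoizer for single-argument functions. The wrapped function receives
// `self` so its recursive calls also hit the cache.
function memoize(fn) {
  const cache = new Map();
  const memo = (arg) => {
    if (!cache.has(arg)) cache.set(arg, fn(memo, arg));
    return cache.get(arg);
  };
  return memo;
}

let calls = 0; // count how many real computations happen
const factorial = memoize((self, n) => {
  calls += 1;
  return n <= 1 ? 1 : n * self(n - 1);
});

console.log(factorial(10)); // 3628800
console.log(calls);         // 10: one computation per distinct input
factorial(10);              // instant: pure cache hit, no new work
factorial(11);              // one new computation: 11 * cached factorial(10)
console.log(calls);         // 11
```

This only works safely because the function is pure; memoizing something with side effects would silently skip those effects on cache hits.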


If you have experience with coding, you may have heard of recursion. Recursion is not exclusive to functional programming, so why do I mention it? Well, for one, to be a functional language you have to support recursion. Also, functional programs can implement incredibly efficient recursion. I touched on this before, but now I'll give some context.

Functional languages must support pure functions, meaning the compiler knows that when it calls a function, there will be no side effects. This lets the compiler rearrange and change the execution of the code in certain ways to maximize efficiency without compromising the result.

More experienced developers might be thinking, “Wow, this sounds great for parallel programming,” and it is! The exact minutia of how the compiler accomplishes this is beyond the scope of this post, but rest assured your functional compiler is working hard for you behind the scenes. Another thing to keep in mind is that all data in functional languages is immutable.

Immutable Data

Immutable data simply means that once a variable is instantiated, its value can never be changed. Functional programs "mimic" mutable data by mapping a function onto an immutable data structure and returning a new data structure as the result.

So why have immutable data? Because it is invaluable for multithreading: immutable data is inherently thread-safe. By now you might be thinking, "Wow, it sure seems like functional programming is great for making intensive mathematical calculations," and you would be exactly right. Functional programming has its roots in lambda calculus and is designed to handle massive calculations efficiently.
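A small JavaScript illustration of the "map instead of mutate" idea (JavaScript only approximates immutability, here via Object.freeze; the data is illustrative):

```javascript
'use strict';

// Frozen data: attempts to mutate it throw in strict mode.
const prices = Object.freeze([10, 20, 30]);

// "Mutation" functional-style: map a function over the immutable data
// and get a brand new structure back; the original never changes.
const salePrices = prices.map((p) => p * 0.5);

console.log(salePrices); // [ 5, 10, 15 ]
console.log(prices);     // [ 10, 20, 30 ] -- untouched
// prices.push(40);      // would throw a TypeError: the array is frozen
```

Because no thread can ever observe `prices` changing, it can be shared across threads without locks, which is exactly the multithreading benefit described above.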


While many programmers never use a functional language in their day-to-day work, knowing about its features and when to use them belongs in every programmer's toolkit. It is worth a quick Google search to see if your predominant language supports functional features. This blog provides a glimpse of what functional programming does and should give you some context on how its features are beneficial. Get those creative juices flowing and see how you can use these features in your day-to-day programming!


Henry Dau is a software developer at Woodridge Software, a custom software development firm located near Denver, Colorado. Woodridge specializes in Web, Android, and iOS development and works with a variety of clients ranging from startups to Fortune 500 companies.

Writing Maintainable Android Apps

At Woodridge, we work with a variety of codebases, consisting of code we have written as well as codebases we have inherited. We have inherited code from developers who retired or passed away, and code from other developers that is simply less friendly to work with. Many of our prospective clients end their current relationship because their app development firm does not deliver on its promises; specifically, the quality of work, the lack of knowledge, and the lack of best practices of other Android developers. Recently, we inherited several Android apps because other Android developers could not meet the clients' technical needs. These Android apps are extremely complex and require special skill sets or, at a bare minimum, a great degree of developer ingenuity and resourcefulness. Below are some best practices for Android developers to keep in mind while working on new and existing projects. This is not an all-encompassing list, but it is a great place to start.

Best Practices for Maintaining Android Apps

Organized Packages

Use packages! Using packages is one of the best ways to keep your code organized. It keeps related segments of code logically grouped together and easy to find. For example, group all of your activities together in an activity package. In most cases, you will want to name members of a package in a way that indicates what the package is a part of (e.g. SplashActivity, LoginActivity, SettingsFragment, etc.). If done properly, this can minimize the number of public methods and variables in your files by allowing you to use package-private methods and variables as an alternative.

Resource Files

All user-facing strings belong in the strings.xml file. This makes adding additional languages, like Spanish, to your app easy, and text changes can be made quickly. Colors should also be stored in the appropriate resource file, namely colors.xml.
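A minimal sketch of what this looks like (the resource names are illustrative): the default strings live in res/values/strings.xml, and a Spanish translation is just a parallel file under res/values-es/ defining the same names.

```xml
<!-- res/values/strings.xml (default locale) -->
<resources>
    <string name="app_name">My App</string>
    <string name="login_button">Log in</string>
</resources>

<!-- res/values-es/strings.xml (Spanish) overrides the same names -->
<resources>
    <string name="app_name">Mi App</string>
    <string name="login_button">Iniciar sesión</string>
</resources>
```

Layouts and code then reference @string/login_button (or R.string.login_button) and Android picks the right file for the device locale automatically.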

Layout Files

Layout files should use the strings and colors from the appropriate resource files and should be named appropriately (e.g. activity_splash.xml, fragment_settings.xml, view_user.xml, etc.). Layouts can also be reused when the same view element appears in multiple spots, like a reusable list item.

Drawable Resources

Android runs on a variety of devices with a variety of screen qualities. In order to ensure your Android app has a smooth look and feel across all phones and tablets, image resources must be provided for all screen densities (mdpi, hdpi, xhdpi, xxhdpi, and xxxhdpi). It is not recommended to copy and paste images from an iOS project into an Android project or vice versa; separate images should be created for each platform, since iOS and Android design conventions differ.


Styles

Styles are a great way to reuse the styling of User Interface (UI) elements across the project. For example, when you have multiple buttons across a variety of pages that look the same, you can place a single style in the styles.xml file and have all of your buttons share it. If you need to make a change later, you only need to make it in one place.

Remove Unused Code

Removing unused code should be common sense but surprisingly many Android developers write code that goes unused. If it is unused then delete it. Furthermore, removing unused code makes Android apps smaller and the codebase less confusing. Android Studio even provides a mechanism to remove unused resources such as layout files and strings.


Use Well-Known Dependencies

Use well-known dependencies to speed up development so you don't have to reinvent the wheel every time. Never get stuck with a library that hasn't seen updates in years; if you do, there is a chance your project will break with future operating system updates. One example of a well-supported Android library is OkHttp, which makes HTTP requests easy to integrate into any Android project.

Android Developers: Listen to Android Studio

Android Studio is a very powerful Integrated Development Environment (IDE), and it gives some very useful hints, so listen to them! For example, Android Studio will tell you useful tidbits such as when a hardcoded string should use a string resource, or when a public variable should actually be declared private.


Finally, there are many other things Android developers can do to make their Android apps maintainable, but this is a great place to start. Above are some of the things the Woodridge development team keeps an eye out for when performing code reviews on the existing codebases of prospective clients. More often than not, we see other developers taking the lazy way out or, simply put, not knowing any better.

Many items receive a failing grade in these code reviews, which requires a “cleanup phase” when we inherit the project. A little organization and following best practices go a long way. On a closing note, someone once said, “Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.”


Lorenzo Gallegos is a senior developer at Woodridge Software, a custom software development firm located near Denver, Colorado. Woodridge specializes in Web, Android, and iOS development and works with a variety of clients ranging from startups to Fortune 500 companies.

Introducing React Native

Have you considered building a mobile app, or perhaps you have the beginnings of a great idea? Either way, you’ll have an important decision to make: “Should my app development team build a native or hybrid app?” There are several variables to consider: the initial app launch time, the budget for the app, the app’s complexity, and post-Minimum Viable Product (MVP) desires. The answer varies from client to client, and in most cases we recommend native app development. However, there has been a lot of hype in the mobile development community surrounding a new hybrid option: React Native.

What is a Hybrid App?

There are many hybrid options that use frameworks. These frameworks allow you to develop an application with a single codebase and deploy it to both platforms, iOS and Android. Some of the most popular hybrid frameworks are PhoneGap, Cordova, and React Native.

What is React Native?


One of Facebook’s open source libraries is called React. React first came out in 2013 as a way to build user interfaces for the web directly in JavaScript. Like many libraries, it comes with a good amount of ramp-up time, but it is backed by a passionate and steadily growing web development community. Though we have not used React in production here at Woodridge, we understand that React offers a clean component-based style of user interface (UI) development formed around the idea that the UI should reactively update when data changes. But make no mistake, React and React Native are different libraries.

In March 2015, React Native was open sourced by Facebook. React Native builds on React, bringing similar concepts and structure to mobile UI development, and carries over some web development ideas to the mobile scene. In fact, when building a React Native app, you won’t be programming in Swift, Objective-C, or Java; you will be programming in JavaScript.

React Native is in the same category as other hybrid frameworks such as PhoneGap or Cordova, though its technology differs greatly. We have developed several mobile apps in the past using Cordova, and the general consensus among the team has been a yearning to return to native development. React Native, however, is a technology we’re genuinely excited about. Simply put, React Native is Facebook’s open source solution for building hybrid mobile apps with a clean native UI.

What are the Advantages of Using React Native?

1. Hybrid App Development

The most obvious benefit of using React Native is that it saves money by building both apps simultaneously: you develop the app once and deploy it on both iOS and Android, since the underlying framework doesn’t distinguish between the two platforms. This is promising because native iOS and Android development take very different approaches, and building a mobile app natively on both platforms can nearly double the cost.

The languages needed in each case are also different: iOS uses Objective-C and/or Swift whereas Android uses Java. Finally, all the underlying API’s are different – the way you use the GPS is different, the way to create animations is different, the way you interact with Bluetooth is different. React Native largely gives one code base for both platforms and that certainly is a big benefit.

However, the big caveat is that as apps become more complex, you end up writing a good bit of native code on top of the hybrid code, plus you need plugins for the camera, Bluetooth, and other native features. As the app grows, it’s no longer one code base but four: the React Native code, the iOS native code, the Android native code, and the plugins.


2. Speed of Development

In many cases, React Native will bring your apps to market faster on both operating systems than native development because only one code base will need to be written. Less overall work means that React Native could potentially save you some bucks if your app fits the simpler profile needed for a hybrid app. Once you introduce more complex functionalities (e.g. GPS, video recording, etc.), you may want to consider native app development.

Why Use React Native Over Other Hybrid Frameworks?

1. Native Look and Feel

PhoneGap and Cordova create single-page applications that run inside of the mobile web browser, so you’re really running web technologies (HTML5, CSS3, JS) to power your user interface. When using a framework like Cordova, your app will not look like it was built for your mobile device platform. On the other hand, React Native has access to native controls and interaction right out of the box: components such as Text, Button, and Slider map to their native counterparts. Facebook calls a React Native app a “real native app,” in the sense that you really are interfacing with the given platform instead of wrapping around it like Cordova and PhoneGap.

2. React Native Performs Better

Other hybrid frameworks are bound by WebViews, and thus by the limitations of a WebView. Because JavaScript is single threaded, Cordova and PhoneGap can produce sluggish pages on busy screens. React Native, on the other hand, makes use of multiple threads and renders UI elements on its own thread. In React Native, JavaScript is not the Atlas of your app.

What are the Disadvantages of Using React Native?

1. Potential Reliability Issues

The largest disadvantage is reliability in complex applications; this is why we recommend that clients choosing between hybrid and native app development go native. We have experienced turbulence in some of our Cordova apps that simply had too much native functionality (e.g. GPS, camera, Bluetooth). Since React Native is relatively new (about two years old), we haven’t used it in production, and we just don’t know how reliable its interface is with more complex native functionality. Native apps can be trusted to be reliable for this kind of work.

2. Facebook Support Uncertainty

The longevity of a React Native app may be in question. Facebook previously offered a development platform called Parse; many applications were fully dependent on it, yet Facebook shut the service down after owning it for only a couple of years. There’s no indication that the same thing will happen with React Native, but Facebook’s long-term support of the project is not guaranteed, whereas Google and Apple will keep updating and progressing their interfaces for the lifetime of Android and iOS smartphones.

3. Recent License Changes


For clients that may be considering patenting a product that makes use of React Native, more research would need to be done on the current licensing agreement to determine whether or not your license to use React Native could be terminated without warning.


Overall, I was impressed with React Native. I built a demo app to showcase its potential and presented two different designs for the iOS and Android versions that shared almost 80% of the code. My experience with ReactJS definitely helped with learning React Native.

The development team at Woodridge is still skeptical of this framework. Clearly, there are advantages and disadvantages to using React Native, but overall the challenges of a hybrid app still exist. We still recommend native app development to our clients in most cases, but we will consider React Native when the complexity is very low, the budget is tight, and the platform requirements allow for a hybrid app. Ultimately, a good candidate for React Native would be an app with minimal back-end or Application Programming Interface (API) work that doesn’t use the camera, GPS, or Bluetooth.


Tyler Bank is a software developer at Woodridge Software, a custom software development firm located near Denver, Colorado. Woodridge specializes in Web, Android, and iOS development and works with a variety of clients ranging from startups to Fortune 500 companies.

Bash Scripting: What is it?

The goal of development is to optimize the time spent on meeting the client’s needs, whether that be adding a new feature or finding and fixing a bug in the code. For this reason, developers maintain very specific computer environments so they don’t waste time on repetitive tasks and can focus on the issue that requires their attention. Through the terminal’s shell, developers can perform tedious sequential actions with a single command, saving keystrokes, time, and, more importantly, money for the client. A Bash script executes a set of commands in sequential order, lessening the time spent on repetitive and tedious tasks in current and future projects. Scripts not only save time but can also help prevent the programmer from making a mistake that could be harmful to the project.

What is Bash & Bash Scripting?

Bash is a command line shell, and a Bash script is a set of instructions for that shell. The terminal, or command line, is an interface where text commands are executed. Although GUIs can provide a smooth user experience, the command line can streamline processes since the user’s fingers never need to leave the keyboard. The command line uses a shell to interpret the user’s commands, the most predominant shell being Bash. The shell is akin to an interpreter of languages – it translates the user’s keypresses into commands the computer can execute. When the terminal is opened, a startup script is executed to determine the interpreting ‘language’ (set to Bash by default), and the user can add to this script to customize their working environment. A script, simply, is a list of commands that are run in order, like a set of instructions for the computer to execute. Instead of putting commands into the startup script, the user could type and execute each command on the command line; however, the user would need to repeat this for each terminal instance. This type of repetitive action is a key indicator of the need for a script, but it isn’t the only reason.

Elements of a Script

The phrase ‘Bash scripting’ can seem somewhat daunting at first; however, if you’re able to use the command line, then writing a Bash script isn’t a difficult task. If you aren’t familiar with the command line, there are tutorials online as well as manuals and help within the terminal itself. The conventional extension for a script file is ‘.sh’, which denotes shell; however, since the script is just a list of commands, the actual extension (or name) is irrelevant.

When writing a shell script, the first line, prior to any whitespace, must begin with what is known as the ‘shebang’ (#!) followed by the path to the shell – for Bash, commonly ‘/bin/bash’ or ‘/usr/bin/bash’ depending on the system. When the script is executed, ‘#!/bin/bash’ tells the computer which interpreter to use and where it lives in the file system. All shell scripts share this first line, with variation in the location and interpreter (be wary: no whitespace or characters may come before the shebang). Following the shebang, commands may be written as if they were typed on the command line. Conveniently, if you are uncertain of a command – particularly one with characters that may need to be escaped – you can echo the command from within the script, or copy it to the command line and execute it there. Executing individual commands this way isn’t always possible, since some commands require another command’s input; however, a simple scenario can typically be set up to test most commands, and a command’s man page (its manual within the terminal) shows its options and usages.
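Putting those pieces together, a minimal script might be sketched like this (the greeting and variable names are purely illustrative):

```shell
#!/bin/bash
# A minimal Bash script: the shebang, a descriptive comment, then commands.
# Takes an optional name as its first argument.

name="${1:-world}"            # default to "world" if no argument is given
greeting="Hello, ${name}!"
echo "${greeting}"
```

Saved to a file (the name is arbitrary, as noted above) and made executable with `chmod +x`, it runs like any other command.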


Conventions are conducive to clean, readable, and modular code. Since conventions aren’t rules that must be followed, opinions vary; however, whether you follow my conventions or not, be consistent in your own. Inconsistency can leave you struggling to understand and edit your own scripts and can confuse others trying to use them. Following the shebang statement, write a comment describing the purpose of the script, whether it requires any parameters or input variables, and what they are. Also keep in mind that a script can fail silently partway through and keep running, which can cause alarming side effects.

By default, a Bash script doesn’t have all options enabled, such as debugging or access control; they can be enabled with ‘set -o <option>’. The three I set at the top of each of my scripts are ‘errexit’, ‘nounset’, and ‘xtrace’, which respectively exit the script on a command failure, exit if a variable is undeclared, and trace command execution. These stop a failing script from continuing and possibly damaging my system, and they provide messages that help quickly identify the problem (though not necessarily quickly fix it, unfortunately). When I am confident in a script, I remove the ‘xtrace’ option to clear the clutter from the terminal window.
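As a sketch, the top of a script using those three options might look like this (the directory name below is hypothetical):

```shell
#!/bin/bash
set -o errexit    # exit the script as soon as any command fails
set -o nounset    # treat expansion of an undeclared variable as an error
set -o xtrace     # print each command before executing it (remove once confident)

dir="demo_output"             # hypothetical name; nounset would abort if this were unset
mkdir -p "${dir}"             # errexit aborts the script here if mkdir fails
echo "created ${dir}"
```

With ‘xtrace’ on, each line is echoed to the terminal (prefixed with ‘+’) before it runs, which makes tracing a failure much easier.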

Script variables should be wrapped in curly brackets, ‘{}’, because there are cases where the brackets are required, and applying them to all variables keeps things consistent and avoids ambiguity and the possibility of error. Script variables should also be enclosed in double quotes, ” “, at least in conditional statements, because if a variable is null or contains whitespace, the script can produce unintended results or simply break.
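A short sketch of why the quotes and braces matter (the file names here are made up):

```shell
#!/bin/bash
file="my report.txt"          # deliberately contains a space

touch "${file}"               # unquoted, this would create two files: 'my' and 'report.txt'

if [ -f "${file}" ]; then     # quotes keep the name as a single word inside the test
    cp "${file}" "${file}.bak"
fi

# Braces matter when a variable abuts text that could be part of a name:
suffix="v2"
echo "${suffix}_copy"         # prints v2_copy; plain $suffix_copy would instead
                              # expand a (nonexistent) variable named suffix_copy
```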

Uses of Scripts

Knowing when to write a Bash script versus manually entering commands can be very dependent on the situation. Suppose that the user wants to make a subdirectory in their current working directory and then copy a file into that new directory. If this action was to be completed only once, it would take more time to program the script, set the correct permissions, and execute it; however, if the user planned on doing this action 100 times with a modular naming convention then suddenly the script becomes much quicker than manually typing those commands.
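That repeated scenario might be scripted roughly as follows (the file and directory names are hypothetical, and the loop is shortened to three iterations for illustration):

```shell
#!/bin/bash
set -o errexit
set -o nounset

src="notes.txt"
touch "${src}"                     # stand-in for the file being copied

# Make a numbered subdirectory and copy the file into it, three times
# (imagine 100 iterations and the time savings become obvious).
for i in 1 2 3; do
    dir="copies_${i}"
    mkdir -p "${dir}"
    cp "${src}" "${dir}/"
done
```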

Programming a script can also be more beneficial when the actions are complicated or potentially damaging, such as modifying large amounts of data, deleting anything, or running a command with many flags and options set. A script lets you thoroughly review the commands – which is easier in a text editor than on the command line – and removes the possibility of accidentally executing one. Scripts can also contain failsafes: the set flags mentioned above that exit on error, echoing a command rather than executing it, and checking whether files and folders exist with conditional statements.
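One common failsafe combines an existence check with a dry-run flag that echoes a destructive command instead of executing it (the directory name and flag are invented for this sketch):

```shell
#!/bin/bash
set -o errexit
set -o nounset

target="old_build"
mkdir -p "${target}"          # set up something we might delete

dry_run=true                  # flip to false only after reviewing the echoed command

if [ ! -d "${target}" ]; then
    echo "nothing to do: ${target} does not exist"
elif [ "${dry_run}" = true ]; then
    echo "would run: rm -r ${target}"    # review this output before committing
else
    rm -r "${target}"
fi
```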

Conditional statements and loops are another reason to use a script, since writing these on the command line becomes confusing almost instantly. With a script you can use nested loops and conditionals to ensure you’re performing the right command on the right files – filtering files by name, extension, the words they contain, last modified time, etc. Scripts can also be transferred across directories and even projects, so if there is a set of commands a user finds themselves running frequently, it is a good use for a script.
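For instance, a loop with a conditional that acts only on files with a given extension might be sketched like this (the directory and file names are invented):

```shell
#!/bin/bash
set -o errexit
set -o nounset

mkdir -p logs
touch logs/app.log logs/db.log logs/readme.txt   # sample files to filter

count=0
for f in logs/*; do
    case "${f}" in
        *.log)  count=$((count + 1))             # act only on .log files
                echo "processing ${f}" ;;
        *)      continue ;;                      # skip everything else
    esac
done
echo "handled ${count} log file(s)"
```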

Your first few scripts may take much longer than simply executing the commands in the terminal; however, these are growing pains. As with any programming language, the more practice and time you put into Bash scripting, the easier and quicker writing a script will become. Bash scripts are so versatile that all their uses cannot be covered in a single article. The key: whenever you catch yourself thinking there must be a better way to do something on the command line, a script will typically be the solution.

Closing Remarks

Bash scripting is a useful tool for a developer to increase productivity and manage menial, repetitive tasks. A script, with proper permissions and syntax, can execute commands in a fraction of the time a user would take. Bash scripting allows a user to elegantly sequence commands together, as well as review commands to avoid potential havoc on data, files, and file structures. Style conventions in your scripts help the debugging process and enable fellow programmers to more easily read the code. In addition, an established convention across multiple developers is conducive to better code, quicker ramp-up time on projects, and easier modularization. Bash scripting does have its deficiencies – its data manipulation features are limited, with the primary focus being file and folder organization and the characters and strings within those files. The next time you find yourself dragging handfuls of files between directories, wanting to batch rename, or facing a complicated action that would manually take the remainder of the day, think about using the command line and Bash scripts to do the dirty work for you.

Think Like An Attacker


It seems like every day we hear about another security breach in the news. From Target to Ashley Madison, everyone is a target nowadays (no pun intended). So how do we protect ourselves? The first steps are awareness and education, but software developers specifically must learn to think like an attacker – and to think like an attacker, you must learn how security breaches occur.

There are many aspects that go into a typical security breach. Security attacks take time, patience, and lots of information; sometimes they don’t involve any “hacks” at all. Many security breaches simply occur because of a lack of training and gullible people. For example, maybe someone left a sticky note on their desk with their username and password. All it takes is a curious passerby to borrow the credentials and there you go, no hack needed! Most security breaches, however, are typically a combination of social engineering and a variety of malicious code.

From a software developer’s point of view, the objective of creating a new product is typically to make it conform to the designs. However, designs typically contain little security thought and are mostly focused on the overall goal of the project. After all, user-interface specifications are not the place to tell you about how to prevent buffer overflow attacks, cross-site scripting, and the like. This typical “make it look like the designs” approach is not sufficient to secure a system. There are many aspects of development that should be put in place but are often not due to lack of knowledge on how to properly secure a system. This is why being able to think like an attacker is an important goal for all developers.

Importance of Penetration Testing

So how does one start thinking like an attacker? A good place to start is reading up on penetration testing. Penetration testing is an important process that is often left out of the software development lifecycle, but when properly included, it can close a lot of doors in a system and help keep the bad guys out. Penetration testing is an act of friendly fire (that’s actually welcomed). The job of penetration testers is to attack a system the way a real attacker would, with the goal of finding vulnerabilities that can ultimately be patched.

A typical approach to penetration testing is: reconnaissance, vulnerability assessment, exploitation, maintaining access, and covering tracks. The first step, reconnaissance, is also known as information gathering. It can include simple things such as reviewing search engine results and social media accounts, as well as technical tasks such as network scanning and service identification. Next comes vulnerability assessment; many tools exist to perform vulnerability scans. Once vulnerabilities have been discovered, exploits can be used to get inside the system. From there, it’s important to maintain access so data can be gathered (passwords, codebases, etc.). Finally, once the goal has been met, it’s important to clean up any evidence that the exploitation ever occurred.

The key aspect of penetration testing is exploitation. Understanding exploits – how they are discovered and how they are prevented – is a crucial part of the software development process. If developers understand how various attacks occur, then preventing them will become second nature as they build their product. This will ultimately yield a more secure product overall.

We live in a digital age. Data is the new gold, and to properly protect our data and other technological assets, the developers in charge of creating and maintaining those assets must be fully aware of what they are up against. The tools that attackers use are powerful, their techniques are clever, and if the assets are worth enough, they will stop at nothing to achieve their goal.

Browser Compatibility

Maintaining code and appearance for your web app across multiple browsers is not always trivial. Different browsers may only implement certain features and sometimes different browsers will implement the same feature differently. You want to support as many users as possible, but doing so can increase development costs. So how do you choose which browsers you need to support?

Costs and Benefits of Supporting Multiple Browsers

If you choose to support multiple browsers, you may encounter bugs that appear in one browser which are absent in all others. There may be differences in how the developer or designer needs to apply a design to the different browsers. This leads to additional development and maintenance time, which has a corresponding cost. In addition, if your web application has hundreds or thousands of users, not supporting an important browser can increase the number of support emails and calls.

If you have a web app for use by a small number of employees, you might only need to support the latest version of a single browser. This can lower development and maintenance costs. On the other hand, if you’re running an online store and rely upon a large number of sales, making sure your website supports most browsers can bring in more customers. You can choose to not do so and make suggestions, such as “This website is optimized for Google Chrome”, but there’s no guarantee the user will switch from their preferred browser. Even if they do switch, having to do so is inconvenient. So, if you don’t have a small enough group of users to justify supporting a single browser, you need to consider which browsers are worth the additional cost.

Google Analytics

If your web-app is already out in the wild and you use Google Analytics, you can view the browser breakdown for your current users. For our website, it looks like Chrome, Safari, and Firefox are the big three, with IE/Edge also seeing users.


Using tools like Google Analytics provides you with real data that is specific to your user base. With these types of tools, you can quickly get an idea of which browsers you need to support for your users.

Browser Statistics

If you don’t have information through a tool like Google Analytics, a good approach is to target the browsers that are currently the most commonly used. There are a few places you can look for this information, such as Stat Counter and W3Counter. It’s worth the time to examine a few different sites for this information – they all have different sources for their data. The provided percentages may also represent different data. For example, the percentage of users using a specific browser vs. the number of webpages viewed by a particular browser.

For January 2017, Stat Counter shows:


And W3Counter shows:


According to both sources, Chrome and Safari are the important browsers to support. But Stat Counter shows UC Browser third (a browser which, before today, I had not heard of), whereas W3Counter doesn’t show it at all. So it’s important to compare multiple resources to get an accurate picture of the most common browsers. You can start by supporting the two or three most common browsers and add support for others over time.

Technological Challenges

On the development side, browsers vary in support for different standards and features. This may result in having to use multiple code and style implementations for different browsers. Supporting older versions of browsers can prevent developers from using newer versions of ECMAScript (used for controlling interactive behavior on a webpage – information on support for the newest version can be found here). There are also new CSS standards and features for designers which may only be usable on newer browsers, such as FlexBox. So, supporting older versions of browsers limits the newer features available to your development team. These newer features may speed up development time, and the absence of some of these features can make certain tasks impossible or finicky for a particular browser. It’s important to have a discussion with your development team about which features of your web app are possible with the browsers you want to support.


There’s a lot to consider when choosing which browsers to support – costs, user base, and technology constraints. Typically, we find it easy to maintain Chrome, Firefox, and Safari (and for some of our customers, we also make sure our code is compatible with older versions of Internet Explorer if their users require it). When choosing which browsers your website or web app will support, it is important to make your decision based upon the best information you have available.
