Why You Should Consider Node.js as a Backend Option for Your Project

I. HISTORY OVERVIEW

In the early 2000s, when the web was small, servers were slow, and client machines were even slower, developers faced the C10K problem: concurrently handling 10,000 client connections on a single server machine. As a solution, multi-process and multi-threaded architectures (a new process or thread for each request) became very popular in mainstream software platforms for web development.

But the web continued to grow (and it still does), the C10K goal became achievable on most software platforms and frameworks, and the community set the next goal – the C10M problem. As you might have guessed, it’s about dealing with 10,000,000 concurrent client connections, which is a tremendous load.

The C10M problem is still relevant today, and developers need new solutions to meet much higher load requirements. Most modern web apps include a RESTful backend along with web and mobile apps that consume the backend API, which increases the load the backend server must be able to handle.

Async I/O-based platforms were created to help developers reach this goal. In this post we’ll talk specifically about Node.js as our preferred platform for developing high-load systems.

II. NODE.JS OVERVIEW

Node.js was initially created in 2009 by Ryan Dahl and other developers working at Joyent (the main contributor to Node.js at the time). The idea was to take the extremely fast Google V8 JavaScript engine and implement system libraries providing common APIs that are absent in browser environments (browsers intentionally sandbox JS code), such as file manipulation, HTTP client/server libraries, and so on.

Because of the async nature of JS (which executes in a single thread and does not support multithreading at all), all Node.js system libraries provide evented, asynchronous APIs for I/O operations, such as reading a file or sending an HTTP request.

So Node.js itself might be described as an asynchronous, event-driven platform. Code written by developers executes in a single thread and, speaking of web backend development, only a single client request is processed at any given moment while all others wait. However, due to the async nature of Node.js, all potentially blocking calls, such as SQL query execution, expose event-based APIs, and invoking one switches processing from the current request to the next pending operation (all of this is coordinated by the event loop).
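
To make the model concrete, here is a minimal sketch of a Node.js HTTP server that performs a non-blocking file read for each request; while the read is in flight, the event loop is free to accept and serve other requests (the file path is just a placeholder).

```javascript
// Minimal sketch: a non-blocking HTTP server. While fs.readFile waits on disk I/O,
// the event loop keeps accepting and serving other incoming connections.
const http = require('http');
const fs = require('fs');

const server = http.createServer((req, res) => {
  fs.readFile('./data.json', 'utf8', (err, contents) => {   // async, callback-based I/O
    if (err) {
      res.writeHead(500);
      res.end('Internal error');
      return;
    }
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(contents);
  });
});

server.listen(3000);   // a single thread handles many concurrent connections
```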

III. COMPARISON WITH “CLASSICAL” SOLUTIONS

Let’s look at “classical,” thread-per-request solutions, such as Java servlet containers (Tomcat, Jetty, etc.) or the Apache web server. By default, the whole thread is blocked on I/O operations here. Thus, the maximum load that a single web server instance can handle is bounded by the maximum number of threads the server can run.

Apart from the fact that creating a new thread is relatively expensive, threads also have quite a heavy memory footprint. For example, each Java thread reserves 1024 KB for its stack by default on a 64-bit JVM, so 10K threads require about 10 GB of RAM just for thread stacks.

So building high-load systems with such software platforms is still doable, but it requires many more server instances to handle the same load.

IV. OUR EXPERIENCE WITH NODE.JS

At DSR we have implemented several projects with Node.js-based backends and have been very pleased with the scalability and performance of the platform.

The strength of Node.js is its async, non-blocking I/O nature. Modern RESTful backends mostly perform I/O operations rather than heavy computation, and this is where Node.js shines. While reaching the C10K goal requires real effort from developers on multi-threaded platforms, for Node.js it is essentially the baseline.

Our experience confirms that Node.js is a stable and developer-friendly platform. Since Node.js 4.0 there are LTS releases, which are stable and supported for a long period of time. As for developers, we find the built-in platform tools, like npm, very handy, and, as for the code, promises combined with generators are simply awesome.
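
As a small illustration of the promises-plus-generators style mentioned above, here is a minimal hand-rolled runner (in practice a library such as co was typically used); findUser and findOrders are hypothetical stand-ins for any promise-returning calls.

```javascript
// Minimal generator runner: drives a generator that yields promises,
// so asynchronous code reads top-to-bottom like synchronous code.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(value) {
      const { value: result, done } = gen.next(value);
      if (done) return resolve(result);
      Promise.resolve(result).then(step, reject);
    }
    step();
  });
}

// Hypothetical promise-returning calls standing in for real DB or HTTP requests.
const findUser   = id   => Promise.resolve({ id, name: 'Alice' });
const findOrders = user => Promise.resolve([{ userId: user.id, total: 42 }]);

run(function* () {
  const user   = yield findUser(1);       // reads synchronously, runs asynchronously
  const orders = yield findOrders(user);
  return orders.length;
}).then(count => console.log('orders:', count));   // -> orders: 1
```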

Of course, there are some negative points, like the relatively small number of production-ready 3rd-party libraries and frameworks. But the platform is rather young, and the Node.js ecosystem is rapidly growing and maturing. The growth of the npm package count illustrates this well (see the chart below).

[Chart: growth of the npm package count over time]

We’re actively participating in the Node.js community and are looking forward to building more great, high-load systems with it.

ZigBee 3.0: Evolution of Things

Another CES has come and gone. The wheels have touched down, and you are likely back home. You and your team have refueled with a few well-deserved, solid nights of sleep, and it’s now time to reflect on what made CES 2016 special. Let’s highlight one of the exciting moments: the ZigBee Alliance announcement of ZigBee 3.0.

With ZigBee 3.0 there is no reinvention of the standard, no sudden updates or unpredictable changes – we are looking at the refinement of a proven technology. To use natural selection as an analogy, we are watching the substantial evolution and adaptation of ZigBee, with the IoT market confirming the technology’s maturity. As wireless technologies and the connected-device business environment change, we are tracking how the ZigBee standard responds.

The key features of ZigBee 3.0 include dramatically improved interoperability and strengthened security. DSR has been continuously involved in implementing ZigBee Pro since the 2006 standard, and we can confirm that the new features in ZigBee 3.0 are a real game changer, especially the convergence of the application profiles into a unified base device implementation. At first glance, this change looks like the kind of revolution in ZigBee that casts doubt on the previous specification. We do not view it this way.

Earlier, when the profiles were developed, the market was a union of isolated areas. Which areas, you might ask? Well, let me challenge you to quickly recite the ZigBee profile names. If you’re like us, you don’t like separate smart home, light control, or energy measurement functions. We want the Internet of Things and we now have extremely inexpensive, more powerful microcontrollers to build it with. We don’t need profiles anymore. We need the unified implementation enabled by ZigBee 3.0.

A structural consequence of the profiles evolving into a base-device approach is the strengthened role of the cluster as a unified application building block (clusters were developed for this, of course). The ZigBee Alliance goes further and standardizes device types. For us, this approach quickly becomes rudimentary, because all the tools for dynamic device discovery are already in place. We’re talking about EZ-mode commissioning, which can now discover all the features of an added device right at the commissioning step. After finding and binding, the application has full details about the joined device and its bound clusters, so the device type information can only be used for predictions. What we would like to see instead of standardized device types is a strict “survival recommendation” list for different groups of devices: for example, recommendations for implementing optional attributes/commands or, more specifically, requiring the Poll Control cluster for sleepy end devices (see our previous blog post).

Overall, the transformation of the profiles multiplies the core and indisputable advantage of ZigBee – mesh networking. Devices that previously joined different networks will now truly co-exist. The new standard allows ZigBee to keep its status as one of the most energy-efficient choices. Moreover, with the Green Power feature in ZigBee 3.0, even devices without batteries can operate in the network.

In conclusion, on top of all the other benefits of ZigBee 3.0, painless backward compatibility and the OTA Upgrade feature guarantee that neither users nor developers will have trouble switching to the new standard or supporting old devices. Best of all, only the ZigBee mark on the device’s box matters now: not the profile, not even ZigBee PRO vs. ZigBee 3.0. How often do you care whether the USB device you buy is 1.1, 2.0, or 3.0? This is the same.

What do we have as a result? A self-healing mesh network of green, low-power devices with a unified, easy installation mechanism, a growing community, and continuous evolution. Isn’t that a synonym for IoT?

The Real Reasons Behind Most ZigBee Interoperability Problems

Interoperability is a buzzword that we hear often when talking about wireless protocols, including ZigBee. ZigBee is an already trusted but still young standard, and its official documentation can itself raise many questions. However, that is not the topic of this blog. With over a decade of experience in wireless communications software development and 7 years working closely with ZigBee, we have seen many cases where, although the specification gives an adequate description, developers reinvent the wheel in their own way. Our extensive experience integrating and working with a large number of sensors from different manufacturers has provided the valuable insights we are sharing in this blog.

The field with the most room for creativity, and hence for mistakes, is the application layer, where profiles join the game.

Let us start with one simple flag – the “manufacturer specific” flag in the ZCL header, whose invalid usage can cause a variety of problems. The right way to use it is to extend the functionality of ZCL (HA) by adding attributes or whole clusters that are not provided officially. For example, we cannot guess why the “Temperature Measurement” cluster has a “Tolerance” attribute while “Humidity Measurement” does not. The point is that if you want a “Tolerance” attribute in your humidity sensor, you need to make it a manufacturer-specific attribute. Or, as another example, let’s say you are working on a ZigBee-based pet tracking system. We promise there is no “Animal Tracker” cluster in any specification. You will need to implement it yourself and, yes, it will be manufacturer-specific.

The common mistake with this flag is marking general attributes and commands with it. We ran into this while working with IAS sensors, and it made us wonder why the standard enrollment procedure would need any manufacturer code. Do developers really consider their manufacturer code better protection from intruders than the entire ZigBee security system?

Anyway, this can be debugged fairly easily, because the only thing we need to know in this case is the manufacturer code. There is a way to obtain it using only ZigBee tools: the code is placed in the node descriptor. If the node descriptor does not help, the code can be requested from the manufacturer. And when there are no contacts, a ZigBee sniffer can help too. If there is a coordinator that the device successfully enrolls with, then by catching the proper enrollment procedure with the sniffer we will get the code. Another way is to write any attribute in the cluster in question and likely get a response carrying the code. Moreover, configuring and binding that cluster may cause some manufacturer-specific attribute to be reported along with the code. So the key is just to be patient.
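
To make the frame layout concrete, here is a minimal Node.js sketch of pulling the manufacturer code out of a raw ZCL frame taken from a sniffer capture. It relies only on the standard ZCL header layout (a frame control byte with the manufacturer-specific bit, an optional 16-bit manufacturer code, a transaction sequence number, and a command ID); the example bytes themselves are hypothetical.

```javascript
// Minimal sketch: extract the manufacturer code from the ZCL portion of a sniffed frame.
// Assumes `payload` is a Buffer holding just the ZCL frame (i.e. the APS payload).
function parseZclHeader(payload) {
  const frameControl = payload.readUInt8(0);
  const manufacturerSpecific = (frameControl & 0x04) !== 0;  // bit 2 of frame control
  let offset = 1;
  let manufacturerCode = null;
  if (manufacturerSpecific) {
    manufacturerCode = payload.readUInt16LE(offset);         // 16-bit code, little-endian
    offset += 2;
  }
  const sequenceNumber = payload.readUInt8(offset);          // transaction sequence number
  const commandId = payload.readUInt8(offset + 1);
  return { manufacturerSpecific, manufacturerCode, sequenceNumber, commandId };
}

// Hypothetical manufacturer-specific Read Attributes command for attribute 0x0000
const frame = Buffer.from([0x04, 0x34, 0x12, 0x2a, 0x00, 0x00, 0x00]);
console.log(parseZclHeader(frame));
// -> manufacturerSpecific: true, manufacturerCode: 4660 (0x1234), commandId: 0 (Read Attributes)
```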

This mistake may be worse when the device confuses ZDP discovery tools: for example, the cluster is not returned in a simple or match descriptor response, but some commands are supported and they are manufacturer-specific. In this case, discovery does not work and you will need either a technical contact or a lot of time to experiment.

In this case, all we know is that the device in our hands is a ZigBee device and what it is used for, so we can predict its cluster list. The only thing we can do without the manufacturer’s help is send commands to the predicted clusters and wait for a response with some status.

The next issue has to do with misunderstanding attribute semantics. When the number of attributes exceeds two or three and the cluster logic becomes complicated, the meaning of an existing attribute is easy to misread. Just imagine trying to set the temperature on a thermostat while the room still stays too cold or too hot. Take this HVAC system and try to guess which setpoint the “Setpoint Raise/Lower” command operates on: it depends on the command’s mode as well as the current system mode. But some developers prefer a single, “clear” attribute, which of course cuts out the existing logic. In such cases, misunderstanding the specification can even lead to attribute duplication.

One of the last common problems has to do with a very useful HA extension – Poll Control. Even though implementing it is strongly recommended, it is often ignored. The real problems come when the device has its own long poll interval that is much longer than the default one. If we leave the situation as is, many packets will certainly be lost for such a sleepy device. Therefore, we should increase the timeout for deleting expired indirect packets. This comes with a risk: if the timeout is too high, the queue will most likely overflow. That is why, when increasing the indirect queue timeout, the updated coordinator should be tested in a large network with a lot of sleepy devices connected.

To close, we want to add a few words about a mistake that will not break interoperability but can be frustrating, and is easily avoided. Unfortunately, as of today we do not have as many reportable attributes as we might want, and everybody who faces this problem solves it in his or her own way. We have seen “Write Attributes” commands sent to the client cluster and even reports that were never configured. It is the only problem described here that can be attributed to a lack of functionality in the official specifications, and we are sure it will be addressed in one of the next updates. But we are equally sure that devices that skip the configure/bind logic before sending reports will not disappear for many years.

We hope this blog gave enough examples to show that most interoperability problems at the application layer appear because ZigBee Alliance documents are not completely understood. With the growth of ZigBee technology and the number of well-designed devices, such misunderstandings can make a product less competitive and harder to support. It is key to take the time to understand and follow the standard to avoid these issues and ensure the success of your products.

Latest Custom Software Applications for Media & Entertainment from DSR

As part of our blog, we like to share our recent experience in various industries. Below are two projects that we have worked on in the Media and Entertainment industry.

SDI Graphics Insertion

DSR recently worked on a project whose purpose was to combine OpenGL application graphics output with 3D video content. High-definition 3D video content was provided in real time as two video streams of unpacked video frames via Serial Digital Interface (SDI). OpenGL graphics were generated on the fly, with the current OpenGL frame corresponding to the current video frame of the 3D content. The output of the combined content was an SDI stream with the same parameters as the input.

One of the project’s requirements was to have no more than a 1-frame difference between the SDI input and output streams, and no more than a 2-frame difference between the OpenGL output and the 3D content.

DSR developed a library that is linked with the OpenGL library and combines the OpenGL output with the SDI stream in real time. An AJA Corvid44 card was used for the SDI functionality. Because this card has a powerful mixer for video content with an alpha channel, we were able to use hardware blending that consumed neither CPU nor GPU resources for that operation.

As the project result, DSR delivered a library with a convenient API, a non-blocking architecture, and the required frame differences between input and output. Integrating the library did not require any changes to the OpenGL application architecture or graphics drawing; only slight OpenGL configuration tweaks were needed to let the library receive content in the format it required.

Automated Datascraping

Another recent DSR project required automating the analysis of online stores’ TV content for presentation and price validity. All analysis data, including screenshots of the web page for a particular TV show, had to be inserted into a database to be reviewed later by an operator via an already existing system UI (where all analysis work had previously been performed manually).

For this project DSR proposed Selenium, a technology that lets a web browser run under program control. With it, a software engineer can emulate searching for a TV show, analyze its web page, and access the web page’s document object model that the browser operates on, all from code.
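
A minimal sketch of this approach using the selenium-webdriver package for Node.js is shown below; the store URL, CSS selectors, and show title are hypothetical placeholders rather than the ones used in the actual project.

```javascript
// Minimal sketch: drive a real browser, search for a show, pull data from the DOM,
// and take a screenshot for the operator. Selectors and URL are hypothetical.
const { Builder, By, until } = require('selenium-webdriver');
const fs = require('fs');

async function analyzeShow(title) {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://store.example.com/search?q=' + encodeURIComponent(title));
    // Wait until the first search result appears, then open it
    const firstResult = await driver.wait(until.elementLocated(By.css('.result a')), 10000);
    await firstResult.click();

    // Read the data to be validated straight from the page's DOM
    const name  = await driver.findElement(By.css('h1.title')).getText();
    const price = await driver.findElement(By.css('.price')).getText();

    // Screenshot of the page for later review by an operator
    const png = await driver.takeScreenshot();               // base64-encoded PNG
    fs.writeFileSync(`${title}.png`, png, 'base64');

    return { name, price };                                  // would be inserted into the DB
  } finally {
    await driver.quit();
  }
}

analyzeShow('Some TV Show').then(console.log).catch(console.error);
```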

Such an approach scales by running several instances of the Selenium-driven analysis script, which reduces the total analysis time when many TV shows and web stores must be processed.

If any of this experience is interesting to you or if you have any questions, connect with us at contact@dsr-company.com.

Java Enterprise Development – the Technological Journey

At DSR, our expertise with Java Enterprise technologies, Java application servers, and servlet containers is often called upon by a wide array of software development projects. Our trials and tribulations in software development are many, but today we have adopted the Spring Application Framework as the DSR standard for enterprise-level applications.

Let me explain why.

In the Beginning, there was Glassfish v2.0 with SOA and ESB

In 2008, DSR was asked by the digital media industry to create a distributed solution that allowed users to work simultaneously with video editing software tools like Final Cut Pro, and a digital media archive that housed a vast amount of content.

To develop the solution, we started with Glassfish v2.0, building a Service-Oriented Architecture (SOA) using the Enterprise Service Bus (ESB) and Business Process Execution Language (BPEL). DSR released several product versions and successfully supported consumers through 2012.

From an engineering point of view, we were satisfied with the technology stack but found its configuration too complex, relying on huge XML documents.

From a project management point of view, I believe we could have chosen better after reviewing the cost/efficiency ratio. In hindsight, we realized the project objectives could have been met with less engineering effort if a less complex stack had been chosen.

Just a Few Years Ago, There was Spring, GWT, and Tomcat

In 2011, DSR was engaged to create a web-based service with stock exchange logic in the background. Taking into account our earlier learning experience with XML documents, we looked for a powerful, less complicated Java stack that allowed us to build a modern and scalable web-application.

Several server-centric technologies were tried, including JSF (JavaServer Faces), which is provided as part of a Java Enterprise application server. However, we soon realized this approach didn’t meet the project’s needs for a good user experience and scalability. We then turned to client-centric technologies and determined that GWT (Google Web Toolkit) showed the most promise in our tests.

For the back end, we had to integrate with a relational database (MySQL or Oracle) and provide a solid infrastructure to manage the solution. After extensive research, we chose the Spring Framework in conjunction with Hibernate, with Tomcat as the servlet container.

From both an engineering and a project management standpoint, we were satisfied with the technology stack and found the Spring Framework to be an integration solution that embeds the best parts of the Java Enterprise libraries while offering a high degree of flexibility to add additional components.

Today, Evolution to Spring and Jetty

In 2012, DSR participated in developing aircraft engineering configuration software used by workgroups. The engagement involved a huge domain model (from a device down to a network package), concurrent versioning functionality with branches, dynamic ACLs, and a rich client-side UI to create and manage an aircraft engineering configuration.

Initially, Glassfish v3 with EJB 3 was chosen for the project. The Eclipse Modeling Framework (EMF) was selected to deal with the domain model, while the Eclipse Rich Client Platform provided the client-side application. We created a custom JPA persistence provider implementation that allowed the EMF-based model to be persisted through Hibernate. We also used a continuous integration approach based on a Jenkins server.

For integration testing, we selected Arquillian technology and after six months of active development, we faced a speed issue in which our integration tests simply took too much time to execute. With a team of 10 engineers, we had several hundred integration tests with 4-hour run cycles that completely blocked our continuous integration approach. In short, our continuous integration server tested the project slower than it was developed.

While investigating solutions (including embedded Glassfish) we uncovered no viable answer. We found the issue could be solved by migrating to Spring and employing the standard Spring testing approach. Fortunately, EJB 3 is very similar to Spring (EJB 3 actually shows a clear influence from Spring), and we spent only about two single-engineer work weeks porting the solution to Spring and its testing approach.

Glassfish v3.0 was also replaced with Jetty, since we didn’t need a Java EE server to run our server application even though we were still using Java EE components. After the migration, we continued to develop the solution for almost 2 years and, after successful acceptance procedures, passed the results to the customer.

To conclude, I’d say that with several years of experience using Java Enterprise technologies, we have proven that the Spring Framework is a very good choice in engagements where using the Java EE platform is not a strict requirement. With Spring, we have satisfied all engineering and project needs faced thus far. Although Java Enterprise Edition is a good standardization across the most popular platforms (we have used and tested the Glassfish and JBoss application servers), we found Spring allowed us to deliver best-in-class results.

Spring has become our recommendation of choice for Java Enterprise-level solutions, although we remain open to supporting our partners’ needs and using a Java EE platform where Spring cannot be used.

Which Big Data technology stack is right for your business?

If data analysis is one of the core features of your product, then you probably already know that choosing a data storage and processing solution requires careful consideration. Let’s discuss the pros and cons of the most popular choices: Redshift/EMR, DynamoDB + EMR, AWS RDS for PGSQL, and Cassandra + Spark.

Managed Amazon Redshift/EMR

PRO – It’s fully-managed by Amazon with no need to hire support staff for maintenance.
PRO – It’s scalable to petabyte size with very few mouse clicks.
PRO – Redshift is SQL-compatible, so you can use external BI tools to analyze data.
PRO – Redshift is quite fast and performant for its price on typical BI queries.
CON – Redshift’s SQL is the only way to structure/analyze data inside Redshift. It may be fine for simple tasks, but for complex tasks like social network analysis or text mining (or even running custom AWS EMR jobs) you have to manually export all data to external storage (S3, for example), run your external analytics tasks, and load the results back into Redshift. The amount of manual work will only grow with time, ultimately making Redshift an obstacle.
CON – Redshift’s SQL dialect for data analysis is also very limited (as a tradeoff for its performance); the main drawbacks are missing secondary index support, no full-text search, and no support for unstructured JSON data. It is usually fine for structured, pre-cleaned data, but it will be really hard to store and analyze semi-structured data there (like data from social networks or text from web pages).
CON – EMR has very weak integration with Redshift: you have to export/import all data through S3.
CON – To write analytical EMR jobs, you have to hire people with pricey Big Data/Hadoop competence.

Managed Amazon DynamoDB + EMR

PRO – It’s fully-managed by Amazon with no need to hire support staff for maintenance.
PRO – It’s scalable to petabyte size with very few clicks of the mouse.
CON – Pricing is opaque, and it may be rather costly to run analytical workloads (with full-table scans, as for text mining) on large datasets.
CON – DynamoDB is a key-value NoSQL store. For most analytical queries, you have to use EMR tools like Hive, which is rather slow, taking minutes for simple queries that typically execute instantly on Redshift/RDS.
CON – DynamoDB is a closed technology that is unpopular in the big data community (mostly because of its pricing). We’ve also noticed difficulty finding people with the required competence to extend the system later.

Custom ‘light’ solution with AWS RDS for PGSQL

PRO – PostgreSQL is easily deployable anywhere, has a very large community, and there are a lot of people with the required competence. You can use either the hosted RDS version or install your own on EC2 – it does not require any hardcore maintenance (unlike your own custom Hadoop cluster) and just works.
PRO – RDS PostgreSQL supports querying unstructured JSON data (so you can store social network data in a more natural way than in Redshift), full-text search (so you can search a user’s friends for custom keywords), and multiple data types (like arrays, which are very useful for storing social graph data) – see the sketch after this list.
PRO – It has full-featured, unrestricted SQL support for your analytical needs and external BI tools.
CON – PGSQL is not “big” data friendly. Although versatile for small to medium data, our experience has uncovered difficulties when scaling to large dataset sizes. Scaling may become a serious issue later and require difficult architectural modifications across the whole analytical backend, but this option may speed up development if data size is not an issue.
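
As a small illustration of the JSON and full-text search points above, here is a hypothetical Node.js sketch using the pg client against an RDS PostgreSQL instance; the table, columns, and keyword are made up for the example.

```javascript
// Illustrative only: querying JSONB and running full-text search in PostgreSQL
// (e.g. RDS) from Node.js with the `pg` client. Table and column names are hypothetical.
const { Pool } = require('pg');
const pool = new Pool();   // connection settings are taken from the PG* environment variables

async function findPosts(authorId, keyword) {
  // JSONB operators: ->> extracts a field as text, @> tests containment
  const byAuthor = await pool.query(
    `SELECT data->>'text' AS text
       FROM wall_posts
      WHERE data @> $1::jsonb`,
    [JSON.stringify({ author_id: authorId })]
  );

  // Built-in full-text search over the same semi-structured data
  const byKeyword = await pool.query(
    `SELECT data->>'text' AS text
       FROM wall_posts
      WHERE to_tsvector('english', data->>'text') @@ plainto_tsquery('english', $1)`,
    [keyword]
  );

  return { byAuthor: byAuthor.rows, byKeyword: byKeyword.rows };
}

findPosts(42, 'concert').then(console.log).catch(console.error);
```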

Custom ‘heavy’ solution with Cassandra + Spark

PRO – Cassandra + Spark can easily handle storing and analyzing petabytes of data.
PRO – Cassandra deals well with semi-structured data, which comes in handy when storing social network data like a user’s Facebook wall posts, friends, etc.
PRO – Spark includes good machine-learning (for example, dimensionality reduction) and graph-processing (usable for social network analysis) libraries. It also has a Python API, so external tools such as NumPy and scikit-learn can be used.
PRO – As a self-hosted solution, Cassandra + Spark is much more flexible for future complex analysis tasks.
PRO – Spark has Spark SQL, which makes integrating external BI tools easy.
CON – Cassandra may need higher-tier competence for the challenges that arise when scaling, which, in turn, may require additional investment in support staff.
CON – Spark is a rather new technology, but it has already positioned itself well within the big data community as a next-gen Hadoop. At present, it may be hard to find people with Spark competence, but the user community is growing quickly, making these skills easier to find as time passes.

To Conclude

The final choice is dictated by your current business priorities.

If you need to move forward fast with less maintenance overhead and are not afraid of later technical debt, we recommend the ‘light’ solution or Amazon DynamoDB. If your top priority is system scalability, then the ‘heavy’ solution surfaces as the clear choice.

The impact of Big Data is really Big

Data analytics (and big data as a part of it) is not just an ordinary business tool, and it is not just a buzzword: it deeply impacts almost all modern industries. By the end of 2014 the big data industry had reached $16 billion in size, with a forecasted value of $48 billion over the next five years. The fact is that, as an innovation, big data has much in common with the invention of the internet – it can revolutionize every industry and affect everybody.

Many industries benefit from data analysis, and here are a few not-so-obvious examples:
  • Farming. With the help of drones, farmers are now able to collect precise information about the health of their crops, the level of field hydration, and the crops’ growth dynamics. By analyzing that data, they can use fertilizers more economically or build a more effective irrigation system. As a result, production costs decrease and revenue grows.
  • Film making. Prior to filming its TV series “House of Cards,” Netflix performed extensive research to determine who should direct (David Fincher), who should play the lead role (Kevin Spacey), and what the plot should be (it is a remake of an older series) in order to hit a certain audience. With an IMDb rating of 9.1, it is safe to say that the analysis was executed well and contributed to the overall success of the show.
  • Oil extraction. Kaggle has built software that helps oil companies determine how much to bid on an oil lease or how to space wells optimally. Oil companies are now able to make more informed decisions, resulting in improved operational indicators.
  • Professional athlete training. Using dozens of different sensors and tools that measure almost every physical parameter of an athlete – blood pressure, heart rate, body temperature, muscle tension – coaches are now able to determine the best training strategy or even predict an athlete’s performance.
  • Fighting crime. The Smart Policing program, implemented in 38 different American police departments, funds and empowers local, data-focused crime prevention tactics. A key feature of the program is “hot spot policing,” which analyzes geographic patterns to uncover likely crime locales.

Big data and data analysis have the potential to improve the performance and operations of any business. How has your company been affected by the big data revolution? With the help of data analytics you can make your product offering more competitive, your services more targeted, and your next steps in the market more strategic. It is very likely that your competitors are already taking advantage of data analytics to strengthen their product or service offerings. If you haven’t already, it is time to consider what big data and data analysis can do for you.

Is IoT finally here? (CES 2015)

DSR has been a regular attendee and, recently, an exhibitor at CES. Every year there is a little bit of the familiar and a lot of the new. With 170,000 attendees and thousands of innovative products in every possible market segment now or soon to be available to end users, CES draws consumers and customers from around the globe. The world continues to evolve, and CES highlights creative implementations of technology.

The general CES highlights included innovations in a wide variety of areas: from curved TVs to personal transporters to innovative wearables, CES was a sight to behold and spoke to every interest and gadget category imaginable. Check out Mashable’s Best of CES 2015 picks for additional highlights.

DSR exhibited in the Sands Expo, and it was all about IoT (Internet of Things). The IoT market has been growing steadily, with each year promising to be bigger than the last, and this year it was very much on display. Lowe’s and Bosch were two companies with large full-home automation solutions demonstrating the many advantages of the automated home.

Other IoT solution providers, including DSR, were showing the wide range of device (gateway and sensor) interoperability. DSR is a ZigBee Alliance member and had our demo set up at the ZigBee booth along with other alliance members.

Traffic was heavy and constant in the ZigBee area during all four days of the show, and we feel that ZigBee is continuing to pique interest and gain traction among wireless protocols and in the overall HA industry.

We observed the following general trends in the IoT area that will shape the market for the next several years:

  • There is a general consensus that IoT is finally mature enough to truly “take off.” The trend that was a long time coming may finally be here, and it is noticeable in the expanded interest from both the consumer and business sides.
  • There were more parties interested in commercial applications (lighting and elevators) than in previous years, which is another indicator of the market maturing quickly.
  • In contrast to commercial applications, which tend to be larger, more complex deployments, home owners are looking for small, easy-to-use, full-featured solutions. The emphasis in this area is really on ease of use. This will be the deciding factor in why some solutions succeed and others fail.
  • With people getting serious about deploying solutions, security is now a common question to address.
  • There is still concern and confusion related to the various IoT standards (Z-Wave, AllJoyn, HomeKit, and ZigBee). People said it was like the old VHS vs. Betamax debate.
  • Some of the common questions that we received at our booth were around the size of the ZigBee stack, number of devices (sensors) and their connectivity range both indoors and outdoors, cost and scalability of DSR IoT cloud, and consumer-facing mobile apps.

Overall, the show reinforced the decision DSR made five years ago to be part of the IoT market. We are excited about the market’s growth and the opportunity to bring innovative products to market and put control at homeowners’ fingertips.

DSR is truly a one-stop shop in the IoT area, with solutions for operators/service providers, access point and set-top-box manufacturers, IoT cloud platform vendors, or anyone looking for a complete IoT solution including hardware and software components. Contact us to learn more!

Resolving GPS accuracy problems in iPhone apps

We have been creating mobile solutions for our customers for almost a decade, and each project brings interesting challenges. Recently, we developed an iPhone application, now available in the App Store, that required us to determine a user’s location at various points using GPS. It turned out that in GPS-based mobile applications the accuracy of the coordinates can vary. These inaccuracies appear on the screen as sudden jumps in a user’s location on the map and are especially visible if the application is trying to plot a route that the user is traveling. They are the result of not discarding bad coordinate points and not detecting fluctuations of points along a route.

In certain situations the CoreLocation framework can return points with incorrect coordinates. To deal with this issue, we developed the following criteria to determine that a point is invalid:
  • The location object is null.
  • horizontalAccuracy < 0.
  • The timestamp of the new location is earlier than the timestamp of the previous location. This indicates that the LocationManager has returned locations in the wrong order.
  • The new location’s timestamp indicates that it was recorded before the application started. The CoreLocation framework caches points from the last time GPS was used, which can produce undesirable results: for example, the user exits the GPS application, drives 40 miles, re-launches the application, and the LocationManager returns a coordinate point that is 40 miles off from the user’s current location.

In situations where the signal is weak or the mobile device has been on standby, the CoreLocation manager can return coordinate points that vary greatly from one another. To address this problem, we applied filtering (a Kalman filter) and interpolation algorithms to detect these false coordinates and smooth out the points along the route. However, if you are building an application that needs to know a user’s location without performing this additional analysis, there is a shortcut that significantly decreases the number of inaccurate points: discard all points where horizontalAccuracy > 150 m.
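
Purely as an illustration (the production code lives in an iOS app on top of CoreLocation, not JavaScript), here is a minimal sketch of the validity checks and the accuracy shortcut described above; the field names mirror the corresponding CLLocation properties.

```javascript
// Minimal sketch of the validity checks described above.
// `location` mirrors the relevant CLLocation fields: { coordinate, horizontalAccuracy, timestamp }.
// `previousLocation` and `appStartTime` are tracked by the caller.
function isValidLocation(location, previousLocation, appStartTime) {
  if (!location || !location.coordinate) return false;          // location object is null
  if (location.horizontalAccuracy < 0) return false;            // negative accuracy means no valid fix
  if (previousLocation && location.timestamp < previousLocation.timestamp) {
    return false;                                                // locations delivered out of order
  }
  if (location.timestamp < appStartTime) return false;          // cached point from a previous session
  return true;
}

// Optional shortcut when no further filtering (Kalman, interpolation) is performed:
function isAccurateEnough(location, maxAccuracyMeters = 150) {
  return location.horizontalAccuracy <= maxAccuracyMeters;      // discard points coarser than 150 m
}
```
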
Until next time, when we continue to share our experience in the mobile world!

The Importance of Project Estimations

We wanted to share our approach to project estimation based on our extensive experience on a variety of projects. Early project estimation is the key to proper project definition and communication with customers. It helps establish project attributes (such as cost, duration, resources, and tasks), set expectations, and ensure that all parties involved have the same understanding of the project objectives.

The estimation process is difficult for a variety of reasons, including over- or under-estimation, exclusion of risks, lack of requirements, failure to involve the right experts, etc.

DSR’s project estimation is done at the WBS (work breakdown structure) level, and each task is estimated down to the hour. Estimating at this level helps identify areas of potential concern and expose inconsistencies between the estimates and the client’s expectations. Inconsistencies may exist either due to the client’s lack of understanding of the underlying complexities or DSR’s incomplete understanding of a task. Resolving these types of issues early in the process decreases the overall project risk, increases the quality of the estimates, and creates a realistic representation of the project.

During the estimation process, we generally try to keep in contact with the customer as much as needed to get the necessary clarifications on tasks and customer’s expectations. This ensures that the scope of the project is set correctly and increases the quality of the estimates.

At DSR, estimates are always made by the resource(s) with the most experience in the given task type and are reviewed by other specialists in the company to ensure validity. In addition, the estimation process goes through several iterations, which allows the customer and DSR to develop a full understanding of the project and its execution and contributes to the overall quality of the project delivery. Using previous projects’ performance and experience to refine the estimates increases their accuracy. Estimates are produced in three measures – optimistic, expected, and pessimistic. The final estimate is a combination of those measures, taking into account the risk of each task.
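
As a simple illustration of how three-point estimates can be combined, the sketch below uses the classic PERT weighting; this is just one common choice, and the exact weighting and risk adjustment applied on a real project may differ. The task names and numbers are hypothetical.

```javascript
// Illustrative only: combining optimistic / most likely / pessimistic task estimates
// with the classic PERT weighted mean. Task names and hours are hypothetical.
function expectedEstimate(optimistic, mostLikely, pessimistic) {
  return (optimistic + 4 * mostLikely + pessimistic) / 6;   // weighted mean, in hours
}

const tasks = [
  { name: 'REST API skeleton',        o: 8, m: 12, p: 22 },
  { name: 'DB schema and migrations', o: 6, m: 10, p: 20 },
];

const totalHours = tasks.reduce(
  (sum, t) => sum + expectedEstimate(t.o, t.m, t.p), 0);

console.log(totalHours.toFixed(1), 'hours');                // -> "24.0 hours"
```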

All of these factors contribute to DSR’s project estimation process and deliver our customers quality information about the cost and duration of their projects.