All Things Must Come to an End—PatternBuilders is Shutting Down
By Terence Craig and Mary Ludloff
There's a sad but true statistic that every entrepreneur knows by heart: 9 out of 10 startups fail. Unfortunately, PatternBuilders is now adding its name to that pile. We have been procrastinating on writing this post because shutting down a company is hard. When you put your heart and soul into something, you need time to process, reflect, and eventually get to the point where you can move on.
But moving on does not mean that we are disappearing; after all, shutting down the company does not end our passion for big data, privacy, and all things tech-related (especially IoT). To that end, we will be maintaining this blog as our main place to write and comment about those issues. We are also consulting in all areas involving big data and/or privacy (via our existing consulting organization, Ludloff-Craig Associates) and are working on some other things that we are keeping under wraps for now. But if you follow our blog, @terencecraig, or @mludloff, you will be the first to know. And if you have interesting opportunities, consulting projects, or – for the right company – a full-time job, please get in touch.
There are a number of reasons why we are shutting our doors, but suffice it to say, we made some decisions we knew might have an adverse effect on the company. And we stand by those decisions.
Events to Measures – Scalable Analytics Calculations using PatternBuilders in the Cloud
One part of the secret sauce that enables PatternBuilders to provide a more accessible and performant experience for both creators and consumers of streaming analytics models is its infrastructure. Our infrastructure makes it easy to combine rich search capabilities with a diverse set of standard analytics that can, in turn, be used to build more complex streaming analytics models. This post describes how we create those standard analytics, which we call Measures.
In my last post about our architecture, we delved into how we used custom SignalReaders as the point of entry for data into AnalyticsPBI. We've tightened up our nomenclature a bit since then, so it's worth reviewing some of our definitions:
| Nomenclature | Description |
| --- | --- |
| Feed | An external source of data to be analyzed. These can include truly real-time feeds, such as stock tickers or the Twitter firehose, as well as batch feeds, such as CSV files converted to data streams. |
| Event | An external event within a Feed that analysis will be performed on, e.g., a stock tick, an RFID read, a PBI performance event, or a tweet. AnalyticsPBI can support analysis of any type of event as long as it has one or more named numeric fields and a date. An Event can have multiple Signals. |
| Signal | A single numeric data element within an Event, tagged with the metadata that accompanied the Event plus any additional metadata (to use NSA parlance) applied by the FeedReader. For example, a stock tick would have Signals of Price and Volume, among others. |
| Tag | A string representing a piece of metadata about an Event. Tags are combined to form Indexes for both Events and Measures. |
| FeedReader (formerly SignalReader) | A service written by PatternBuilders, customers, or third parties to read particular Feed(s), convert the metadata to Tags, and potentially add metadata from other sources to create Events. Simple examples include a CSV reader and a stock tick reader. A more complex example is the reader we created for the University of Sydney project, which filters the Twitter firehose for mentions of specific stock symbols and hyperlinks to major media articles and then creates an Event that includes a Signal derived from the sentiment scores of those linked articles (that reader was discussed here). A FeedReader's primary responsibility is to convert the "raw data" received from one or more Feeds into an indexed Event: it reads the Feed, converts the accompanying metadata to Tags, optionally enriches the Event with metadata from other sources, and hands the finished Event off for processing. |
| Measure | A basic calculation that is generated automatically by the PatternBuilders calculation service and persisted. Measures are useful in and of themselves, but they are also used to dynamically generate results for more complex streaming Analytic Models. |
As the topic of this post is Events to Measures, let's create a simple Measure and follow it through the process. For this purpose, we'll work with a simplified StockFeedReader that creates a tick Event from a tick feed that includes two Signals – Volume and Price – for stock symbols on a minute-by-minute basis. The reader will enrich the Feed's raw tick data with metadata about the company's industries and locations. After enrichment, the JSON version of the Event would look like this:
{ "Feed": "SampleStockTicker", "FeedGranularity": "Minute", "EventDate": "Fri, 23 Aug 2013 09:13:32 GMT", "MasterIndex": "AcmeSoftware:FTSE:Services:Technology", "Locations": [ { "Americas Sales Office": { "Lat": "40.65", "Long": "73.94" } } { "Europe Sales Office": { "Lat": "51.51", "Long": "0.12" } } ], "Tags": [ { "Tag1": "AcmeSoftware", "Tag2": "Technology", "Tag3": "FTSE" } ], "Signals": [ { "Price": "20.00", "Volume": "10000" } ] }
Note that there is a MasterIndex field that is a concatenation of all the Tags about the tick. When the MasterIndex is persisted, it is actually stored in a more space-efficient format, but we will use the canonical form of the index as shown above throughout this post for clarity.
A MasterIndex has two purposes in life:
- To allow the user to easily find a Signal by searching for particular Tags.
- To act as the seed for creating indexes for Measures and Models. These indexes, along with a date range, are all that is required to find any analytic calculations in the system.
Once an Event has been created by a FeedReader, the FeedReader uses an API call to place the Event on the EventToBeCalculatedQueue. Based on beta feedback, we've adopted a pluggable queuing strategy. So before we go any further, let's take a quick detour and talk briefly about what that means (a sketch of the interface that makes the queues interchangeable follows the list). Currently, PatternBuilders supports three types of queues for Events:
- A pure in-memory queue. This is ideal for customers that want the highest performance and the lowest cost and who are willing to redo calculations in the unlikely event of machine failure. To keep failure risk as low as possible, we actually replicate the queues on different machines and optionally, place those machines in different datacenters.
- Cloud-based queues. Currently, we use Azure Service Bus queues, but there is no reason that we couldn't support other PaaS vendors' queues as well. The nice thing about Service Bus queues is that the latest update from Microsoft allows them to be used on-premise against Windows Server 2012 with the same code as in the cloud, giving our customers maximum deployment flexibility.
- The AMQP protocol. This allows our customers to host FeedReaders and Event queues completely on-premise while still using our calculation engine. When combined with encrypted Tags, this lets our customers keep their secrets "secret" and still enjoy the benefits of a real-time cloud analytics infrastructure.
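To make the detour concrete, here is a minimal sketch (in Python, purely for illustration; the interface and class names are assumptions, not our actual API) of the small contract that makes these queue backends interchangeable:

```python
from collections import deque
from typing import Optional, Protocol


class EventQueue(Protocol):
    """Illustrative contract for a pluggable Event queue; any backend
    (in-memory, Azure Service Bus, AMQP) just has to honor it."""

    def enqueue(self, event: dict) -> None: ...

    def dequeue(self) -> Optional[dict]: ...


class InMemoryEventQueue:
    """The fastest and cheapest option; Events are lost on machine
    failure unless the queue is replicated to other machines."""

    def __init__(self) -> None:
        self._items: deque = deque()

    def enqueue(self, event: dict) -> None:
        self._items.append(event)

    def dequeue(self) -> Optional[dict]:
        # Return the oldest Event, or None when the queue is empty.
        return self._items.popleft() if self._items else None
```

A Service Bus or AMQP backend would implement the same two methods, so FeedReaders and the calculation pipeline never need to know which transport is in play.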
Once the Event is placed on the EventToBeCalculatedQueue, it will be picked up by the first available Indexing server, which monitors that queue for new Events (all queues and Indexing servers can be scaled up or down dynamically). The indexing service is responsible for creating Measure indexes from the Tags associated with the Event. This is the most performance-critical part of loading data, so forgive our skimpiness on implementation details; we are going to let our competition design this one for themselves :-). Let's just say that, conceptually, the index service creates a text-searchable index for all non-alias Tags and any associated geo data. Some Tags are simply aliases for other Tags and do not need Measures created for them. For example, the symbol AAPL is simply an alternative for Apple Computer, so creating an average volume metric for both AAPL and Apple is pointless since they will always be the same. Being able to find that value by searching on either AAPL or Apple, on the other hand, is amazingly useful and is fully supported by the system.
More formally:
<Geek warning on>
The number of Indexes produced by an Event will be:

$$\sum_{k=1}^{n} \binom{n}{k} = 2^{n} - 1$$

where $n$ equals the number of non-alias Tags and $k$ is the size of each Tag combination.
</Geek warning off>
From our simple example above, we have the following Tags: AcmeSoftware, FTSE, Services, and Technology. This trivial example ($n = 4$, so $2^{4} - 1 = 15$) will produce the following Indexes (a sketch of how they can be generated follows the list):
AcmeSoftware
FTSE
Services
Technology
AcmeSoftware:FTSE
AcmeSoftware:Services
AcmeSoftware:Technology
FTSE:Services
FTSE:Technology
Services:Technology
AcmeSoftware:FTSE:Services
AcmeSoftware:FTSE:Technology
AcmeSoftware:Services:Technology
FTSE:Services:Technology
AcmeSoftware:FTSE:Services:Technology
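Those Indexes can be generated mechanically. Here is a minimal sketch (in Python, purely for illustration; the production indexing service is proprietary and considerably more involved):

```python
from itertools import combinations


def build_indexes(tags):
    """Produce one ':'-joined Index per non-empty combination of
    non-alias Tags: 2**n - 1 Indexes for n Tags."""
    canonical = sorted(tags)  # keep Tags in a stable, canonical order
    return [
        ":".join(combo)
        for k in range(1, len(canonical) + 1)
        for combo in combinations(canonical, k)
    ]


# Reproduces the 15 Indexes listed above.
for index in build_indexes(["AcmeSoftware", "FTSE", "Services", "Technology"]):
    print(index)
```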
The indexing service can perform parallel index creation across multiple cores and/or machines if needed. As Indexes are created, they, and each Signal in the Event, are combined into a calculation request object and placed on the MeasureCalculationRequestQueue, which is monitored by the Measure Calculation Service.
The Measure Calculation Service will take each index and use it to create or update all of the standard Measures (Sum, Count, Avg, Standard Deviation, Last, etc.) for each Signal, for each unique combination of index and the Measure's native granularity (granularity management is complex and will be discussed in my next post).
Specifically, the Calculation Service will remove a calculation request object from the queue and perform the following steps for all Measures appropriate to the Signal (sketched in code after the steps):
- Attempt to retrieve the Measure from either cache or persistent storage.
- If not found, create the Measure for the appropriate Date and Signal.
- Perform the associated calculation and update the Measure.
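Conceptually, that loop looks something like this (a simplified Python sketch; the names, the dictionary used as a stand-in for cache/persistent storage, and the use of Welford's method for the running standard deviation are illustrative assumptions, not our production implementation):

```python
from dataclasses import dataclass


@dataclass
class Measure:
    """Standard streaming aggregates for one (Index, date, Signal) key."""
    count: int = 0
    total: float = 0.0
    mean: float = 0.0
    m2: float = 0.0  # running sum of squared deviations (Welford)
    last: float = 0.0

    def update(self, value: float) -> None:
        # Fold one new Signal value into Sum, Count, Avg, StdDev, and Last.
        self.count += 1
        self.total += value
        delta = value - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (value - self.mean)
        self.last = value

    @property
    def std_dev(self) -> float:
        return (self.m2 / self.count) ** 0.5 if self.count else 0.0


def handle_request(index: str, date: str, signal: str, value: float,
                   store: dict) -> None:
    """One pass of the Calculation Service loop described above."""
    key = (index, date, signal)
    measure = store.get(key)              # 1. look up in cache/storage
    if measure is None:
        measure = store[key] = Measure()  # 2. create if not found
    measure.update(value)                 # 3. calculate and update
```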
Graphically, the whole process looks something like this:
The advantages of this approach are manifold. First, it allows for very sophisticated search capabilities across Measures and Models. Second, it allows deep parallelization of Measure calculation: each combination of Index, time, and Measure is unique and can be calculated by separate threads or even separate machines. This parallelization lets us scale the system by adding more Indexing Services and Calculation Services with no risk of contention, and it is this scalability that allows us to provide near real-time, streaming updates for all Measures and most Models. A Measure can be aggregated up from its native granularity using a pyramid scheme if the user requests it (say, by querying for an annual number from a Measure whose Signal has a native granularity of a minute). A proprietary algorithm prevents double counting in the edge cases where Measures with different Indexes are calculated from the same Events.
So now you've seen how we get from a raw stream to a Measure and how, along the way, we enrich metadata and numeric data to enable both richer search capabilities and easier computation of more complex analytics models. In later posts, we will explore how searches are performed and models are developed; you will see how this enrichment process makes exploring and creating complex analytics models much easier than with the first generation of big data, business intelligence, or desktop analytics systems.
However, before we get there, we need to talk about how PatternBuilders handles dates and Granularity in more detail. At our core, we are optimized for time-series analytics, and how we deal with time is a critical part of our infrastructure. That is why my next post will be a deep (ok, medium-deep) dive into how we handle pyramidal aggregation and the always slippery concepts of time and streaming data. Thanks for reading and, as always, comments are free and welcome!
A Big Data Showdown: How many V’s do we really need? Three!
Marilyn Craig (Managing Director of Insight Voices, frequent guest blogger, marketing colleague, and analytics guru) and I have been watching the big data “V” pile-on with a bit of bemusement lately. We started with the classic 3 V’s, codified by Doug Laney, a META Group and now Gartner analyst, in early 2001 (yes, that’s correct, 2001). Doug puts it this way:
“In the late 1990s, while a META Group analyst (Note: META is now part of Gartner), it was becoming evident that our clients increasingly were encumbered by their data assets. While many pundits were talking about, many clients were lamenting, and many vendors were seizing the opportunity of these fast-growing data stores, I also realized that something else was going on. Sea changes in the speed at which data was flowing mainly due to electronic commerce, along with the increasing breadth of data sources, structures and formats due to the post Y2K-ERP application boom were as or more challenging to data management teams than was the increasing quantity of data.”
Doug worked with clients on these issues and spoke about them at industry conferences. He then wrote a research note (February 2001) entitled "3-D Data Management: Controlling Data Volume, Velocity and Variety," which is available in its entirety here (pdf too).
Our Favorite Reads of 2012
By Mary Ludloff & Terence Craig
Greetings one and all! 2012 was a breakout year for PatternBuilders and we are very grateful to all of you for helping to make that happen. But we would also like to take a minute to extend our condolences and share the grief of parents across the world who lost young children to violence. Newtown was singularly horrific, but similar events play out all too often across the globe. We live in an age of technical wonders—surely we can find ways to protect the world's children.
This is our last post of 2012 and, in the spirit of the season, we decided to do something a little different this year. Recently, the Wall Street Journal asked 20 of its "friends" to tell it what books they enjoyed in 2012, and the responses were both eclectic and interesting. Not to be outdone, Adam Thierer published his list of cyberlaw and info-tech policy books for 2012. Many of the recommendations culled from both sources ended up on our reading lists for 2013 (folks, 2012 is almost over and, between launching AnalyticsPBI for Azure and working on the update to Privacy and Big Data, not a lot of "other" reading is going to happen during the holiday season!) and spurred an interesting discussion about our favorite reads of the year. One caveat: our lists may include books we read but that were not necessarily published this year. So without further ado, we give you our favorite reads of 2012!
Introducing AnalyticsPBI for Azure—A Cloud-Centric, Component-Based, Streaming Analytics Product
It has been a while since I've done posts that focus on our technology (and big data tech in general). We are now about two months out from the launch of the Azure version of our analytics application, AnalyticsPBI, so it is the perfect time to write some detailed posts about our new features. Consider this the first in the series.
But before I start exercising my inner geek, it probably makes sense to take a look at the development philosophy and history that form the basis of our upcoming release. Historically, we delivered our products in one of two ways:
- As a framework which morphed (as of release 2.0) into AnalyticsPBI, our general analytics application designed for business users, quants, and analysts across industries.
- As vertical applications (customized on top of AnalyticsPBI) for specific industries (like FinancePBI and our original Retail Analytics application) which we sold directly to companies in those industries.
“Hadoopla”
I had to miss Strata due to a family emergency. While Mary picked up the slack for me at our privacy session, and by all reports did her usual outstanding job, I also had to cancel a Tuesday night Strata session, sponsored by 10gen, on how PatternBuilders has used MongoDB and Azure to create a next-generation big data analytics system. The good news is that I should have some time to catch up on my writing this week, so look for a version of what would have been my 10gen talk shortly. In the meantime, to get me back in the groove, here is a very short post inspired by a Forbes piece written by Dan Everett of SAP on "Hadoopla."
As the CEO of a real-time big data analytics company that occasionally competes with parts of the Hadoop ecosystem, I may have some biases (you think?). But I certainly agree that there is too much Hadoopla (a great term). If our goal as an industry is to move big data out of the lab and into mainstream use, beyond the companies that thrive on, and have the staff to support, high-maintenance, high-skill technologies, then Hadoop is not the answer: it has too many moving parts and is simply too complex.
To quote from a blog post I wrote a year ago:
“Hadoop is a nifty technology that offers one of the best distributed batch processing frameworks available, although there are other very good ones that don’t get nearly as much press, including Condor and Globus. All of these systems fit broadly into the High Performance, Parallel, or Grid computing categories and all have been or are currently used to perform analytics on large data sets (as well as other types of problems that can benefit from bringing the power of multiple computers to bear on a problem). The SETI project is probably the most well known (and, IMHO, the coolest) application of these technologies outside of that little company in Mountain View indexing the Internet. But just because a system can be used for analytics doesn’t make it an analytics system…”
Why is the industry so focused on Hadoop? Given the huge amount of venture capital that has been poured into various members of the Hadoop ecosystem, and that ecosystem's failure to find a breakout business model that isn't hampered by Hadoop's intrinsic complexity, there is ample incentive for a lot of very savvy folks to attempt to market around these limitations. But no amount of marketing can change the fact that Hadoop is a tool for companies with elite programmers and top-of-the-line computing infrastructures. In that niche, it excels. But it was not designed for, and in my opinion will never see, broad adoption outside of that niche, despite the seemingly endless growth of Hadoopla.
Privacy, Big Data, Civil Rights, and Personalization Versus Discrimination: When does someone else’s problem become ours?
There has been a great deal of media attention on the benefits of big data (just look at our @bigdatapbi twitter stream) lately. Certainly, PatternBuilders has been busy helping financial markets become more efficient, working with data scientists on various research projects, as well as helping other businesses with their big data initiatives. In fact, there are a number of companies (like ours) that are making significant strides in reducing the costs associated with legacy big data systems, helping to move big data out of the early adopter phase and into the mainstream. But as technology innovates, there is usually some “bad” thrown in with all that good. Such is the case with big data and privacy.
Two thought-provoking articles on privacy were published this month, both considering privacy through a civil rights prism. In "Big data is our generation's civil rights issue, and we don't know it," Alistair Croll states that:
“Personalization” is another word for discrimination. We’re not discriminating if we tailor things to you based on what we know about you — right? That’s just better service.”
Big Data Tools Need to Get Out of the Stone Age: Business Users and Data Scientists Need Applications, Not Technology Stacks
Things have been crazy at PatternBuilders recently. The excitement and positive reactions to FinancePBI, our financial services big data analytics solution, from media, analysts, venture folks, cloud infrastructure partners, and users have been amazing. Our new cross-industry graphical big data correlation mashups are generating a lot of excitement as well; we like to call this feature Google Correlate on steroids. Check out how our newest partner, the analytics consultancy InsightVoices, has used it to find relationships between stock prices and traffic sensor data.
Mary’s recent post on Strata West 2012 provides a great overview of how hot the hype cycle around big data has become (while managing to work in a plug for her favorite gory TV series as well). In case you’re still not convinced, here are some additional nuggets:
- The market for big data technology worldwide is expected to grow from $3.2 billion in 2010 to $16.9 billion in 2015, a compound annual growth rate (CAGR) of 40% (hat tip to IDC).
- The amount of big data being generated continues to grow exponentially and is now expected to double every two years. This is largely driven by social networks, smartphones, and really cool IP-enabled devices like the Fitbit and the iPhone-based brain-scanning device from our new Strata buddy Tan Le at Emotiv Lifesciences. Yes, she is much smarter than us, but we like her anyway!
- The White House is even doing its share, investing $200 million a year in access and funding to help propel big data sets, techniques, and technologies, while giving a shout-out to our friends at Data Without Borders.
Data and Technology Have No Moral Compass: But that does not mean that we get to abdicate all responsibility.
I do not consider myself an idealist and I would not call myself naive. That being said, as Terence and I researched our book, Privacy and Big Data, there were moments when I threw up my hands and said, "Really?" Certainly, the recent spate of articles on surveillance technologies, and how governments around the world are buying and using those technologies to, for want of a better term, spy on their citizens, gave me pause.
Don’t get me wrong—I know these technologies exist. I am also very aware that the regulatory environment does not really address what devices or applications built on top of these technologies can do. The reality is that companies like Datong sell "intelligence solutions" to the military, law enforcement, and intelligence agencies around the world. Recently, an article in the Guardian revealed that:
Maps: Lessons Learned
Recently we’ve been adding new user-friendly features to our platform and I’d like to talk about our map view. In particular, I want to discuss the lessons we learned from the map in the first version of the PAF (PatternBuilders Analytics Framework) versus the one in our new Silverlight client.
You may have already seen some screenshots of the map in our AJAX web client – when we released the first versions of PAF, we integrated with Google Maps to help users see their data on a map for quick comparisons and analysis. It has always been a helpful tool, but it suffered from a learning curve for new users and could confuse people because of the way it displayed data.
Showing time-series data on a map is a tricky proposition – the map is already two-dimensional, and adding the two dimensions of time-series analytics takes it into the fourth dimension. As exciting as it would be to see a four-dimensional map view (we'd definitely be the only company doing it!), I don't think most human beings would be able to understand it.