Recent Updates

  • Dave Chamberlain 14:55 on April 8, 2020 Permalink | Reply  

    The United Kingdom-wide discussions we must have 

    The United Kingdom has been presented with a once-in-a-lifetime opportunity – to discuss and decide how, post-C-19, we want to be. Now is the time for strong leadership to take this on, involving the whole of our nation. If C-19 shows us anything, it is that we can’t continue the way we were. We the people need to determine our priorities: what they are, how we fund them and how that money is raised. For example, we need to ponder things like:

    • The importance of the NHS – we’ve always known it, and now we are experiencing that importance first hand.
    • Rich individuals and corporations are not paying their fair tax burden – my views are:
      • Corporations must pay tax in the country where the income is earned (no more Liechtenstein service companies).
      • A progressive tax is needed on the really wealthy – make them pay their fair share.
    • The poor, the old and the disadvantaged are massively underserved – how can we do better for them?
    • C-19 and the reduction in traffic shows that we need to dramatically reduce the use of private cars – especially in city centers.
    • Bring back proper employment – zero-hours contracts & the gig economy have been exposed.

    There are many such topics we as a nation need to tackle – and we are now presented with the perfect opportunity to start the discussions.

     
  • Dave Chamberlain 11:49 on April 30, 2019 Permalink | Reply  

    Poor quality data – a major cause of ineffective & inefficient AML processes – part 3 

    Poor quality data – part 1

    Poor quality data – part 2

    Starting with the root cause – poor quality data

    The poor quality of data entering the screening process has been identified as the number one issue, so we suggest starting there. Incorporating a way of improving the quality of data into the front end of the screening process makes sense. Being able to identify and fix even the common errors shown in part 2 of this series makes a tremendous difference to screening effectiveness. Increased screening effectiveness means far fewer false positives to deal with. Fewer false positives reduce the need for human resolution, and that in turn enables investigators to spend more time investigating.

    Having 50 plus years of experience helping thousands of customers identify and deal with poor-quality data, FinScan is uniquely positioned to help stem the false-positive tide that is a major contributor to the spiraling costs and decreasing effectiveness of compliance.

    Quality of data built into screening process – the new approach

    It’s vital that ensuring the highest quality of data becomes an integral front end of the screening process. Only when clean, trusted data enters the screening process can trusted AML outcomes follow.


    All the common types of data errors we looked at in part 2 need to be identified and fixed before screening. Doing this effectively requires software built on years of experience, learning more with every implementation and becoming more effective as a result. Check out our recently available FinScan Premium and take the challenge.
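    To make the idea concrete, here is a minimal sketch of “clean before you screen” (purely illustrative, not FinScan’s implementation). The field names and the clean_record and screen helpers are invented for this example; the point is simply that mechanical errors are repaired before any matching happens.

        import re

        def clean_record(record: dict) -> dict:
            """Fix common, mechanical data errors before screening (illustrative only)."""
            cleaned = dict(record)
            # Trim and collapse whitespace, and normalise case, so that
            # '  SMITH,   john ' and 'Smith, John' screen identically.
            cleaned["name"] = re.sub(r"\s+", " ", record.get("name", "")).strip().title()
            # Strip punctuation that commonly creeps into identifiers.
            cleaned["passport_no"] = re.sub(r"[^A-Za-z0-9]", "", record.get("passport_no", ""))
            return cleaned

        def screen(record: dict, watchlist: list) -> list:
            """Stand-in for the real matching engine: exact name comparison only."""
            return [entry for entry in watchlist if entry["name"] == record["name"]]

        raw = {"name": "  SMITH,   john ", "passport_no": "GB-123 456-7"}
        hits = screen(clean_record(raw), watchlist=[{"name": "Smith, John"}])

    Without the cleaning step the raw record would miss the watchlist entry entirely; with the cleaning step, the hit is found.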

    Screen early, screen often

    It’s becoming increasingly obvious that the current approach of screening customers only at the time of on-boarding, or annually, is dated and not an effective way of ensuring you don’t do business with bad guys. We are seeing, and supporting, an increasingly popular approach of embedding screening earlier and more frequently in business processes or workflows – the recent Wolfsberg guidance on sanctions screening encourages this approach. As customer bases change, and as sanctions, PEP and adverse media lists are constantly updated, continuous monitoring is a far more effective approach.
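    One way to picture “screen early, screen often” is as event-driven re-screening rather than a calendar-driven batch. The sketch below shows only the shape of that idea; the screening_engine object and its screen method are assumptions made for illustration, not a description of any particular product or of the Wolfsberg guidance itself.

        from datetime import datetime, timezone

        # Hypothetical event-driven re-screening: screen again whenever the customer
        # record changes or a watchlist is republished, not just at on-boarding.
        def on_customer_updated(customer, screening_engine):
            screening_engine.screen(customer,
                                    reason="customer data changed",
                                    at=datetime.now(timezone.utc))

        def on_watchlist_updated(watchlist, customers, screening_engine):
            # Re-screen the existing book only against the list that actually changed.
            for customer in customers:
                screening_engine.screen(customer,
                                        lists=[watchlist],
                                        reason="watchlist update",
                                        at=datetime.now(timezone.utc))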

    The technology needed for effective screening

    Not all screening technology is created equal. We will cover more detail in a companion piece, but screening effectively (minimising false positives without missing a true hit) requires battle-hardened technology with a set of minimum requirements that include:

    • Has built-in high-quality data to provide the highest levels of accuracy;
    • Is configurable to handle multiple screening rules, based on what is being screened, for what reasons, and the risk appetite in that situation (a toy example of such a rule set follows this list);
    • Deals with any volume of data, both for the sanctions, PEP & adverse media lists and for the incoming volume of data to be screened;
    • Excels at support for the major sanctions, PEP & adverse media, UBO and EDD providers, together with straightforward implementation of customers’ own white and black lists;
    • Scales to meet all requirements, from a few hundred names a month to hundreds of millions a day;
    • Meets the most stringent data protection and data privacy regulations;
    • Offers flexible implementation choices; increasingly this means cloud based, but numerous jurisdictions and applications require on premise, and some customers may need a hybrid approach with the right mix of cloud and on premise;
    • Has a proven approach to migrating from old, creaky, inaccurate technologies to the latest and greatest.
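
    As an aside, “configurable screening rules” can be as simple in shape as the toy rule set below. The scenario names, list names and thresholds are invented for illustration; real configurations are far richer, but the principle of tying lists and match strictness to risk is the same.

        # Toy, invented example of risk-based screening configuration.
        SCREENING_RULES = {
            "onboarding_high_risk": {"lists": ["sanctions", "pep", "adverse_media"],
                                     "match_threshold": 0.80},
            "onboarding_low_risk":  {"lists": ["sanctions"],
                                     "match_threshold": 0.90},
            "payment_screening":    {"lists": ["sanctions"],
                                     "match_threshold": 0.85},
        }

        def rules_for(scenario: str) -> dict:
            """Look up which lists to screen against, and how strictly, for a scenario."""
            return SCREENING_RULES[scenario]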

    Conclusion

    The spiraling cost of AML compliance must be brought under control. As we have seen from the late 2017  McKinsey & Company study, costs are going through the roof, yet compliance is not getting any better. We are advocating the following approach:

    1. Accept that your data is not as good as you think or hope it is;
    2. Engage with the most experienced company to help – take the challenge;
    3. Start planning and executing a project to migrate to new AML technology and make a substantial difference.

    The problems we all see today are not going away. If no action is taken, the problems can only get worse. The bad guys are always finding new and inventive ways of hiding themselves and their money. Increasingly fast-changing regulations cover an ever wider set of requirements with harsher penalties for non-compliance. Finally, volumes of data are increasing, and the quality of that data will not improve of its own accord.

    Now is a good time to start the process of change. We are here to help.

     
    • Mohit Arora 13:25 on June 6, 2019 Permalink | Reply

      Great article Dave!

  • Dave Chamberlain 14:27 on April 15, 2019 Permalink | Reply  

    Poor quality data – a major cause of ineffective & inefficient AML processes – part 2 

    View Poor quality data – part 1

    The data

    It’s vital to start with an accurate assessment of the quality of the data going into screening processes. In doing so we often find high numbers of duplicate customer records, generally because several (often legacy) systems hold data about the same customer, represented slightly differently.

    Duplicate and close duplicate records

    [Figure: results of duplicate-record analysis]

    Here we see typical data analysis results. Only 68% of customer records are unique; importantly, our analysis shows that 27% of records are detected as duplicates that can be automatically resolved, leaving just 5% requiring human resolution. Bear in mind that this is a one-off step; going forward, only new customers will need this same level of inspection.
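    For a sense of what “automatically resolved” can mean in practice, here is a deliberately simple sketch of near-duplicate detection using only Python’s standard library. The normalisation rules, the similarity threshold and the field names are assumptions made for this example, not a description of FinScan’s matching.

        from difflib import SequenceMatcher

        def normalise(name: str) -> str:
            # Lower-case, drop commas, and sort the name tokens so word order is ignored.
            return " ".join(sorted(name.lower().replace(",", " ").split()))

        def likely_same_customer(a: dict, b: dict, threshold: float = 0.85) -> bool:
            """Flag two customer records as probable duplicates (illustrative heuristic only)."""
            similarity = SequenceMatcher(None, normalise(a["name"]), normalise(b["name"])).ratio()
            return similarity >= threshold and a.get("date_of_birth") == b.get("date_of_birth")

        # Same person, different name order and a spelling slip: flagged as a probable duplicate.
        likely_same_customer({"name": "Chamberlain, David", "date_of_birth": "1960-01-01"},
                             {"name": "David Chamberlin",   "date_of_birth": "1960-01-01"})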


    Data errors within records


    Over the many years these systems have been in place, approaches to verifying, validating and correcting data have evolved. Even so, when you see the number of basic errors and inconsistencies in data, you have to wonder how on earth we allowed this to happen – but allow it we did. In much the same way that the majority of duplicate records can be identified and fixed, we need to be able to do the same with data errors within records.
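    As a purely illustrative companion to the duplicate example above, the kinds of in-record errors described here can be caught with simple field-level checks. The field names, the date format and the tiny country list below are invented for the example; real validation rules run much deeper.

        import re
        from datetime import datetime

        def find_record_errors(record: dict) -> list:
            """Collect basic, mechanical problems in a single customer record (illustrative)."""
            errors = []
            name = record.get("name", "")
            if not name.strip():
                errors.append("missing name")
            if re.search(r"\d", name):
                errors.append("digits in name field")
            try:
                if datetime.strptime(record.get("date_of_birth", ""), "%Y-%m-%d") > datetime.now():
                    errors.append("date of birth in the future")
            except ValueError:
                errors.append("unparseable date of birth")
            if record.get("country", "") not in {"GB", "US", "DE"}:  # stand-in for a full ISO list
                errors.append("unknown country code")
            return errors

        # Three classic errors in one record: a digit in the name, a non-ISO date, 'UK' instead of 'GB'.
        find_record_errors({"name": "J0hn Smith", "date_of_birth": "31/12/1970", "country": "UK"})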


    What next?

    This may seem obvious to everyone, but the rapid increase in the size and cost of compliance teams must be brought under control. The question is: what pragmatic, yet highly effective, steps can be taken to start?

    As the poor quality of data entering the screening process has been identified as the number one issue, we suggest starting there. Incorporating a way of improving the quality of data into the front end of this screening process makes sense. Being able to identify and fix even the common errors shown above makes a tremendous difference in the effectiveness of the screening. Increasing screening effectiveness results in having far fewer false positives to deal with. Fewer false positives reduce the need for human resolution, and that in turn enables investigators to spend more time investigating.

    Having 50 plus years of experience helping thousands of customers identify and deal with poor-quality data, FinScan Premium tackles the false-positive problem head-on, helping to stem the false-positive tide that is a major contributor to the spiraling costs and decreased effectiveness of compliance.

     
  • Dave Chamberlain 12:47 on April 15, 2019 Permalink | Reply  

    Poor quality data – a major cause of ineffective & inefficient AML processes – part 1 

    What’s the big issue?

    In a November 2017 study, McKinsey & Company identified poor-quality data as a major cause of banks’ ineffective and inefficient AML processes.

    Three main factors

    The study highlights three main factors affecting banks’ efforts to fight financial crime.

    1. Economies are becoming more closely integrated, and the increase in cross-border transactions exposes flaws in banks’ AML processes;
    2. Regulators are expanding their reach from organised crime to terrorism and other financial crimes, and rules and regulations are constantly being revised and tightened;
    3. The use of economic sanctions is growing; governments and agencies are expanding their use to cover countries as well as specific individuals and entities as part of their policies.

    Screening customers and entities against commercially available lists of sanctioned people and entities, PEPs and adverse media is a vital part of AML, KYC and many other processes. Of course the initial screening is not the end of it; the latest 2019 Wolfsberg screening guidance suggests regular, event-triggered re-screening.

    The big challenge

    The effectiveness of AML programs and processes is directly affected by poor quality data. When data about a person or entity to be screened is not of the highest quality, the results are equally poor. At a high level there are three main problems we encounter every day:

    • Poor-quality data – we’ll focus on this and cover what we mean in more detail later;
    • Coupled with this, a constant over-estimation – some might say a delusional view – of the quality of that data;
    • Inconsistent processes and poor systems for dealing effectively with the results of poor-quality data.

    No one can question that the biggest problem faced by compliance teams is the rapidly rising cost of meeting their compliance obligations, primarily caused by the overwhelming number of false positives needing human review and resolution.

    Current systems are often expensive, mainframe-based monolithic systems that impose significant personnel and computation costs on customers. These systems are difficult to upgrade to new releases and have significant maintenance costs.

    World-wide regulatory environments are becoming increasingly harsh and complex. Bad guys find new and clever ways of hiding their activities.


    This all results in compliance costs rising at unacceptably high rates. False positives requiring human resolution drive the rising complexity and cost of compliance. Teams are overwhelmed and often cannot keep up with the workload, risking the wrath of regulators, who may grade the institution as non-compliant, with the associated consent orders and/or financial penalties.

    More of our data related findings in the next post.

     

     
    • Randal J. Skipper 12:51 on April 15, 2019 Permalink | Reply

      Very good Dave!

      Sent from my iPad

  • Dave Chamberlain 12:13 on October 23, 2017 Permalink | Reply  

    Turn the traditional sales cycle on its head 

    How many times have you sat through the product demo? How many times have you seen the same sign-on screen? How many times have you seen the same demo flow that, after the excruciating errors, failures (“that didn’t happen when I tried it earlier”) and flubs, finally reaches its astonishing climax? The problem is that by the time of the big reveal, many in the audience have lost interest, are tapping away on their laptops or just not paying attention – ever wonder why that is?

    [Figure: outcome flow chart]

    It’s sort of simple: when discussing, presenting or demonstrating anything, start with the climax – start with the astonishing outcome your products/platform/whatever enables the customer to achieve. Now you have their attention. Now you can start the process of describing, explaining and showing how the outcome is achieved and how they can easily achieve the same. It sounds easy, but somehow as a software industry we seem to have fallen in love with, and feel the urge to show off, our wonderful bits and pieces, products and platforms. Frankly, customers don’t care – they care about the outcome – and only if they like the outcome do they care about the process of achieving it.

    Starting with the outcome in discussions, presentations and demonstrations enables faster and more accurate go/no-go decisions to be made. If the customer doesn’t value the outcome you enable them to achieve, it’s time to move on and talk to others, saving your customer and you time, money and, frankly, boredom.

     

     
  • Dave Chamberlain 10:12 on October 20, 2017 Permalink | Reply  

    Differentiation through a superior customer experience 

    Introduction

    Over the last six or more years much has been written about the value and process of mapping the customer journey. Mapping is a good starting point, because without a full understanding of the current state of anything it is not possible to effect change. The difficulty comes when the journey has been mapped. What next? How do you take the new customer journey and map it down through the layers of business processes and underlying applications that represent the touch points identified in the journey mapping?

    Re-imagining and re-building the customer journey is a vital part of any (not just a retailer’s) digital transformation. Omni-channel and other major projects aimed at changing the business must recognize this. It is not just a simple matter of redesigning the user interface and flow on the web site or releasing a new mobile app. Clients need to understand the ripple-down implications through the layers of their current process applications and IT estate. As a simple example, gaining accurate real-time inventory visibility is (or at least should be) an essential part of any omni-channel approach. It’s easy to say but very difficult to achieve, given the numerous processes affected and the complex, siloed legacy applications that need to be “adjusted” to get the required end-to-end visibility in real time. One buy-online-pickup-in-store flow, for example, touches and requires integrations with over 20 different underlying systems – these complex customer journeys need coordinating within and across the applications. This coordination layer – an agnostic business process management layer – needs to sit across all the systems and not inside one of them.
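    As a purely illustrative sketch (the system names and calls below are invented, not a reference architecture), that coordination layer is essentially an orchestration function that drives several independent back-end systems for a single step of the journey:

        # Hypothetical orchestration of one buy-online-pickup-in-store (BOPIS) request.
        # Each parameter stands in for an independent back-end system; in reality there
        # may be 20+ such integrations behind this one journey step.
        def place_bopis_order(order, inventory, payments, store_ops, notifications):
            store = inventory.find_store_with_stock(order.sku, near=order.postcode)
            if store is None:
                return "offer_home_delivery"   # the journey branches; the process must handle it
            inventory.reserve(order.sku, store_id=store.id)
            payments.authorise(order.customer_id, order.amount)
            store_ops.create_pick_task(store.id, order)
            notifications.send(order.customer_id, "Your order is ready for pickup at " + store.name)
            return "pickup_confirmed"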

    Modelling a customer journey is also about identifying the risks involved; areas that can be improved and also finding and implementing opportunities to better engage the customer or provide new offerings and services in the right context.

    This document is intended to be a starting point for discussing, agreeing and taking to market a holistic approach that includes the five layers that need to be considered and dealt with for success.

    1. Capturing the new (to-be) customer journey
    2. Providing for rapid innovation and change
    3. Contextualising the interactions of the journey for maximum effect
    4. Understanding and implementing the required process changes needed
    5. Integration within and between numerous legacy systems

    In 2010, Harvard Business Review wrote about using customer journey maps to improve customer experience. The article starts: “A customer journey map is a very simple idea: a diagram that illustrates the steps your customer(s) go through in engaging with your company, whether it be a product, an online experience, retail experience, or a service, or any combination. The more touch points you have, the more complicated — but necessary — such a map becomes. Sometimes customer journey maps are “cradle to grave,” looking at the entire arc of engagement. Here, for example, is a customer journey timeline that includes first engaging with a customer (perhaps with advertising or in a store), buying the product or service, using it, sharing about the experience with others (in person or online), and then finishing the journey by upgrading, replacing, or choosing a competitor (re-starting the journey with another company):”

    In July 2016, McKinsey & Company wrote about a truly omnichannel customer experience. The article starts: “Integrating digital and traditional channels into a truly omnichannel offering is even harder—but multiplies the rewards. In sector after sector, companies are asking how they can adapt to the digital world—how they can build more digital capabilities, create more digital offerings, and even become “digital first” organizations.

    But for institutions that have served customers for decades in person and over the phone, digital too often falls short. After the debut of a new app, for example, a jump in sales may not be as big as expected, while hoped-for operational efficiencies—such as a reduction in expensive call-center and in-store customer-support requests—hardly materialize.

    Executives naturally wonder why: aren’t customers demanding digital? Without question, they are. But not to the exclusion of other channels, which remain critically important. For example, as much attention (and fear) as Amazon may generate among traditional retailers, as of early 2016 about 92 percent of retail sales in the United States—the company’s home and largest market—were still taking place in person.

    Customer journey – from imagination to reality

    There is a large and growing body of work making it clear that mapping the customer journey is an essential part of any successful transformation, omnichannel or other major retail project. In our view the required follow-on elements are often missing – what is needed to take the re-imagined customer journey from strategy to execution – making the new journey real.

    Because of rapidly changing and ever-evolving consumer expectations, the new customer journey must satisfy several essentially non-functional requirements: it must be appealing, it must be intuitive, it must be contextual and it must be fast. The ability to rapidly innovate – to prototype, build, test and release new capabilities in days, not months – is also essential.

    Context is king – a category of one – there are many ways of saying that consumers demand to be treated as individuals. As they move through their journey you need to ensure they are treated in the right context: build your knowledge of their prior relationship with you into the journey, make them compelling offers where appropriate, and show that you know them and that you care.

    [Figure: customer journey in context]

    The execution of the journey has numerous implications for existing processes and for the new processes that need to be built and (re)orchestrated. Then, at the lowest level, the requirements for integration within and between systems do not go away – if anything, they increase.

    Conclusion

    Re-thinking, re-designing and re-building the customer journey and experience are vital for survival, let alone growth. Our customers must consider the implications the new journey will have for the technology stack. There will be numerous new requirements: new integrations across numerous back-end legacy and cloud-based systems; new business processes to be created and existing processes adjusted; contextualizing the customer journey for maximum effectiveness; and exposing these new and re-engineered processes for rapid internal and potentially external innovation while at the same time protecting the organization. The end result needs to be a journey that is not just appealing but intuitive, fast to respond and contextual – anticipating my needs almost before I know them – all resulting in a compelling experience that enables customers to buy more, more often.

     
  • Dave Chamberlain 13:00 on October 19, 2017 Permalink | Reply  

    How did we get into this data mess? 

    Imperfect data – A historic perspective

    Our world of computing in 1969 was very different from today. In 1969, Dr. E.F. (Ted) Codd published his first internal IBM paper, “Derivability, Redundancy and Consistency of Relations stored in Large Data Banks”, followed in 1970 by the ACM publication, “A Relational Model of Data for Large Data Banks” – the birth of relational databases as we know them today.

    Organizations used to have complete control of their data. With just a few systems (usually to automate back office functions) there was no concept of customer self-service, or integrated supply chains, or third party data feeds, or just about anything we take for granted today.

    Data was generated by professional data entry staff; they took pride in getting the data entry right, with very low error rates. Data was processed sequentially, tapes spinning round and lights flashing brightly; often you could tell what job was being processed by the noises in the computer room.

    What’s changed?

    What’s changed over 40 years? Today the typical organization runs hundreds, if not thousands, of systems spread across large data centers – many of these applications sharing data with external sources, the supply chain and external data feeds, while we constantly try to get our customers to do as much of the data entry themselves as we can. When you add up 40+ years’ worth of growth and change, we can see how organizations have come to have such volumes of “imperfect” data to deal with – data that is full of errors, inaccuracies and inconsistencies.

    SQL has little ability to deal with imperfect data

    In 1969, there was no concept of anything other than perfect data. This was a major contributor to the fact that, as RDBMS and SQL were being defined, very little allowance was made for dealing with errors in data. “Like” or “contains” clauses and “wildcard” characters enable data with known errors to be found, and very little else. If SQL can’t find the data people and systems need, then it has to be searched for by hand, so significant human effort is spent trawling through databases to find the right data. Some organizations have tried to deal with the problem by building monolithic dynamic SQL search systems, which they typically find are very resource-intensive. These systems take a lot of effort to design, build and maintain, and still end up not being able to find the data.
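    A small, self-contained illustration of the point, using SQLite and Python’s standard library rather than any particular product: the wildcard search only finds the variations you thought to encode in the pattern, while even a crude similarity measure finds the unanticipated misspelling.

        import sqlite3
        from difflib import get_close_matches

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (name TEXT)")
        conn.execute("INSERT INTO customers VALUES ('Katherine Johnson')")

        # A LIKE/wildcard search only catches the errors you anticipated in the pattern:
        rows = conn.execute("SELECT name FROM customers WHERE name LIKE ?",
                            ("Catherine%",)).fetchall()
        # rows == [] : the misspelled search term finds nothing

        # A similarity-based search tolerates the unanticipated variation:
        all_names = [r[0] for r in conn.execute("SELECT name FROM customers")]
        matches = get_close_matches("Catherine Jonson", all_names, cutoff=0.8)
        # matches == ['Katherine Johnson']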

    The route forward

    If only we could leverage all that we now know about data and go back in time to build RDBMS and SQL with the built-in ability to deal with all sorts of data effectively and efficiently. More realistically, of course, we need a different way to find the data people and systems require without needing to know the multitude of ways data can be “imperfect.” We also need to bear in mind that people are very good at finding data using their built-in ability to see through errors and differences – the only problem is that they work at their own, much slower pace. Providing systems with the ability to work as accurately as humans, yet at the speed of systems, is long overdue.

     
  • Dave Chamberlain 10:35 on October 16, 2017 Permalink | Reply  

    If I ruled the world (at least the sales & pre-sales world) 


    What drives you nuts? What small number of things would you insist on if you ruled the world? This is my list, offered with the expectation/aspiration that they would become part of the natural way of doing business…


    1. Respond to internal requests/questions promptly (within one business day would be a good start) and with as much clarity and detail as possible.
      • How often have you had to chase a colleague up one or more times – only then to get a token, meaningless response?
    2. Before meetings…
      • Prepare, prepare, prepare – know who you are meeting, what the organization does, a starter set of questions to ask and a starter set of responses to possible questions.
    3. During meetings…
      • There can be nothing so off-putting as the line of laptops across the table… Pay real attention, put your laptop away, ask questions and take notes by hand on paper – just like the good old days!
    4. After meetings…
      • Organize your notes, coordinate with colleagues and respond (within one business day) to the other parties with a comprehensive set of notes and actions agreed to, and don’t forget to ask if there is anything to add.
    5. Value speed over accuracy.
      • With complex documents or responses you know the first draft will need revisions – so get it done quickly so others have the chance to critique.
    6. Put your phone numbers, correctly formatted for one-click dialing, in your email signature for both new emails and replies.
      • How many times – especially when travelling – have you struggled to find someone’s phone number? And then struggled to easily dial it?
    7. Finally – and this principle applies when not covered by earlier items – make yourself and your business easy to do business with!
      • As a simple example, seems to me that asking for my full details and purchase intent to download a whitepaper is much too much. Give me the choice, and if I need help, I’ll contact you – thanks…

    Much of the above is obvious, and most would say of course it should be like this; the question in my mind is why it isn’t…

     
  • Dave Chamberlain 17:39 on October 12, 2017 Permalink | Reply  

    GDPR Readiness – part 4 – not as ready as we think we are or claim to be 

    In my previous three GDPR-related posts I looked at the following main areas:

    1. It’s a business problem much more than a technical/technology problem
    2. Major organizational and technical transformation is needed
    3. What’s the best way of maximizing chances of success?

    As each day goes by, it becomes apparent that despite what organizations will tell you, they are nowhere near ready for GDPR let alone good data protection. If anything the readiness trend-line seems to be taking a turn for the worse. Let’s take a look and see.

    Earlier this year, in May, Gartner estimated that by the end of 2018 over 50% of companies affected by the GDPR would not be in full compliance with its requirements. That’s pretty understandable, as the legislation, with its 173 recitals and 99 articles, is quite comprehensive and completely non-prescriptive. It does a great job of describing how the future of personal data protection should look and defines the rights of us data subjects quite explicitly – but it does not tell any of the many tens of thousands of affected organizations how to go about doing it. It is all open to a variety of legal and technical interpretations.

    Just four months later, in August, McKinsey found that only one of the 19 participants in their European survey believed his or her company would fully comply by the deadline.

    So in four months we have gone from 50% to 5.26% being ready – quite an interesting statistic, and one that adds fuel to the overwhelming marketing blitz from just about every vendor, each with potentially their small piece of the puzzle – or not, as the case may be. It seems to me that at least part of the reason for the readiness decline is that people are just starting to fully grasp the size and scale of the organisational and technical remediation needed. As organizations start to peel away the layers of the GDPR and data protection onion, it’s becoming apparent that the solution is not to leave it to IT, or to slap on a few extra cyber products. The real solution is a recognition, from board level down, that we had finally better start really caring about and taking care of the personal data of EU citizens (after all, it’s now pretty clear whose data it is), in whatever role they are dealt with (meaning it is not just customers; employees, partners, vendors etc. are all equally covered and have equal rights).

    Yet at the same time, while readiness seems to be on the decline, the potential upside is becoming increasingly apparent. People buy, and buy more from organizations they trust and buy less from those they don’t trust. In late 2016 an Accenture study looked at the implications of gaining trust in the digital age – and it’s obviously what needs to be understood and executed against. Then in March 2017 a paper by McKinsey on the value of customer data showed that organizations that make most effective use of data substantially outperform their peer group – more good reasons to do the right thing.


    So we now have a very interesting situation. On the one hand, the regulators in the 28 member states are trying to achieve the desired effect with the big-stick approach of very large fines and strict timescales. On the other hand, the evidence is growing that taking much better care of sensitive personal data is beneficial to the business. Why wouldn’t you?

    And a parting word from Elizabeth Denham, the UK Information Commissioner “If an organisation can’t demonstrate that good data protection is a cornerstone of their business policy and practices, they’re leaving themselves open to enforcement action that can damage their public reputation and possibly their bank balance. That makes data protection a boardroom issue”.

    More on this topic at a later date.

     
  • Dave Chamberlain 11:28 on October 12, 2017 Permalink | Reply  

    Building a high performance “sales” team 

    Having been involved with enterprise software selling for more years than I care to remember, it seems to me that after all those years very little has changed. Fads come and go, product names and delivery mechanisms change, but fundamentally the sales process is at best flawed and at worst completely broken. I have been thinking quite a bit lately about why this is; after all, there is lots of advice and guidance freely available and lots of good people involved, so why the performance gap? In many ways I think the problems we have as an industry stem from the differences between what organizations say and what they do as they go about building high-performance “sales” teams. What they say are all the “right” things – what they do is often completely different. The net result of these differences is the boom-or-bust cycle we are used to seeing.

    An enterprise software company (like all companies) lives and dies by sustainable and growing sales. How does this happen? Is this just the job of sales? Or does each main function need to be involved, and if so, how?

    From an organization wide view it seems a pretty straightforward six point plan would make a sensible starting point, for instance:

    1. Determine overall objectives
    2. What’s the strategy?
    3. How does this translate into an execution plan?
    4. What needs to be measured?
    5. How will results be published?
    6. How best to adjust as needed?

    To me it seems the most broken piece is that each major function involved either has:

    • No plan at all – and is therefore executing a random series of “things”; or
    • Its own version of the plan – meaning there is no organization-wide alignment, no shared set of values to work from, and no shared set of objectives to strive towards and meet.

    A few other thoughts: openness and transparency are key. Each of the major functions, for instance:

    • Sales
    • Pre-sales
    • Marketing
    • Channels
    • Product management/R&D

    needs to have its own variant of the six-point plan published, and its performance measured against stated goals and published for all to see. Some think power comes from secrecy about what they are doing, why they are doing it and how well they perform. My view is that power comes from sharing and having a common goal, with well-defined objectives and published measures of success, together with (if needed) a remediation plan.

    Some interesting research on high-performance sales published in Harvard Business Review (HBR – 5 things in common) focuses on the sales team itself. I think the same five characteristics can and should be applied to each of the major functions that really are (or at least should be) a major part of the sales process and that factor highly into overall success – or failure.

    [Figure: sales life cycle]


    This all looks and sounds really simple – right? So what stands in the way? Why is it that just about every enterprise software company acts as a number of disconnected fiefdoms rather than as a unified, high-performance whole?


    To me at least, creating an environment where core values and principles are shared across the organization is key. All too often we have seen R&D or marketing or other functions claiming the greatness of their accomplishments, with no regard for the overall objectives, strategy or execution plan… Some by all – not all by some.

    More on this topic later…

     
  • Dave Chamberlain 07:18 on September 28, 2017 Permalink | Reply  

    Do enterprise software vendors need sales reps? 

    Last week I was in conversation with a senior pre-sales person, talking about the role of sales compared to pre-sales (see 2015 – pre-sales manifesto part 1). He posed a very interesting question, something along the lines of “do we really need sales people?” That got me thinking about my life in enterprise software sales and the value brought by the various actors to a sales situation and, eventually, a hopefully closed deal or two. My pondering started with recalling the wide variety of sales reps I have worked with over the years; if anyone recognizes themselves, believe me, it’s pure coincidence. Just to keep things simple, I’ve organised reps into three main categories:

    1. Well meaning but incompetent – you know the type: on the weekly Monday morning status/pipeline calls (why does every vendor do this?) they have numerous semi-plausible reasons why their deals have not progressed – the customer unexpectedly away, bureaucratic delays, requirements that have changed, project reviews that were not known about, last-minute competition; the list is literally endless.
    2. Waiting for the phone call – deep in the basement home office. For years they’ve been relatively successful: a big deal somehow comes along each year, a customer is already established and upgrades, orders more stuff or whatever it might be – the rep makes their number and everyone’s happy, and the cycle repeats itself until… the rep moves on to greener pastures, now with a different set of customers and buying habits. In the first year the big order doesn’t happen, the rep misses their number and moves on; the situation repeats. The slow decline into obscurity and increasingly desperate vendors, eventually becoming a sad, Ratso Rizzo-like character.
    3. Big bad bully – can do anything at any time with any customer, always makes their numbers (at least at their prior vendors), boastful of prior conquests – and with a well-practised knack for pissing off customers (and potential customers).

    That got me thinking about two major things: 1) what is it customers expect from a good sales rep? and 2) what are the characteristics of the really successful sales reps I’ve known over the years?

    1. What do customers expect? An interesting question to ask customers. I’d say that, given the experiences many customers have with numerous reps from their current and (hope-to-be) future vendors, the answer is very little. Most customers have become used to the double-speak, missed promises and sometimes downright lies. Most reps haven’t grasped that buying has changed over the last 10-15 years. It used to be that face-to-face meetings and product literature from the vendor were the only sources of information. Now customers do over 80% of their upfront research, due diligence etc. before ever contacting specific vendors. When customers’ buying changes so dramatically, the way we sell should change too – but rarely, if ever, does it. We do things the same way we always have, we measure in the same way, we calculate pipeline and ratios of this measure to that measure (thank you, analytics) and of course beat the crap out of reps when their achievement against these numbers misses the mark. Is it any wonder, then?
    2. Good sales reps come in all shapes and sizes. A few things they seem to have in common are really the basics we have been taught since day one:
      • Understand what your customer (or potential customer) is trying to do, and more importantly why they are trying to do it. The why is 10 times more important than the what… Getting to the real underlying why might require several conversations with a number of people, peeling away the extra layers of the complicated onion. The why is the real gold – the business rationale for the project.
      • Coordinating skills – often in complex sales situations there are numerous different players, each with their own sets of skills and roles in this particular sales cycle. How many times have you seen sales reps just throw a whole bunch of resources, some with fancy-sounding titles, at the situation, then hope the potential customer will come away with a good impression rather than a more realistic assessment of the situation – it’s a CF in the making. Being able to understand who, and what skills, are needed at each stage and how they can help move the sale forward is essential – yet sadly lacking in most.
      • An ability (of course with the required skills to back this up) to challenge the customer’s understanding or knowledge of what they are trying to do. We see it time and time again that the sales reps who accept things at face value might do OK if luck is with them, but… The sales reps who excel challenge their customers’ conventional thinking about a particular topic. They have a good grasp of where this customer wants to be in 3, 5 or 7 years’ time and how they can be helped along the path. Of course the line between challenging and over-challenging is a fine one – so sometimes the consequence is that you get thrown out – onwards and upwards.

    See my other musings on enterprise software sales and pre-sales; roles and responsibilities; the discovery process and so on – the discovery process and handling the POC/POV – the POC/POV

     
  • Dave Chamberlain 11:12 on May 25, 2017 Permalink | Reply  

    GDPR Readiness – part 3 – maximizing chances of success 

    In part 1, It’s a business problem, and part 2, It’s a major transformational project, I looked at what I think are the key issues organizations are facing. Many still think that GDPR specifically, and good data protection in general, is just a bunch of techie projects to be done, whereas, as we have seen, it reaches into the heart of the way business is done and is deserving, if not demanding, of board-level concern and oversight. The more people we talk to, the more apparent it is that the size and scope of the problems – and the solutions – are not fully (understatement…) appreciated. The reality is it’s not just “fixing” IT systems; there are major organizational changes needed in the way people think about and act on the data. Remember the Elizabeth Denham (UK Information Commissioner) quote from an earlier post – this is serious and important stuff.

    “If an organisation can’t demonstrate that good data protection is a cornerstone of their business policy and practices, they’re leaving themselves open to enforcement action that can damage their public reputation and possibly their bank balance. That makes data protection a boardroom issue”.

    There is a seemingly endless set of items to consider. Here are just a few to start with – feel free to use this list as a starter set for your own, much longer list:

    • How are your customer journeys affected?
    • What happens when a customer enforces their right to erasure – are there processes in place? How have the risks been identified and documented? What controls are in place?
    • What is the impact of the analytics and profiling restrictions?
    • How are employees going to be informed and kept up to date with current data protection policies?
    • What IT systems are affected by the right to erasure?
    • What evidence will you supply to the regulator on 25 May next year?
    • Will the board really care?
    • What’s our exposure to reputational risk?
    • Where’s the funding going to come from?
    • How will other key, in-progress projects be impacted?
    • What’s the best way of prioritizing everything that needs to be done?
    • Etc. etc. etc.

    So much to do and so little time. In many ways the answer to most, if not all, questions starts with proper planning – fully documenting where you are and where you want to be. There is a tremendous gap between the realized value of unplanned projects when compared to properly planned projects; a 4-minute video illustrates the difference: why bother planning?

    So that begs the question: what sorts of tools and skills do you need to effectively and efficiently plan everything that needs to be done? Bear in mind that there are numerous organizational implications as well as IT remediation projects to conduct, while you still have other projects in flight. Here’s a suggested life cycle of events – there are obviously other steps required; I have tried to identify (IMO) the key steps on your GDPR journey.

     

    [Figure: GDPR life cycle]

    Of course, effective planning on this scale requires top-notch tools to be successful. The first area to consider is the process side – four major things to think about:

    1. How do you capture and categorize areas of weakness and vulnerability that need fixing?
    2. What will you use to model the current – what are the as-is processes identified for remediation?
    3. Remember to use this opportunity to capture and document an initial record of processing activity – in which processes, and how, sensitive data is processed (a toy example of such an entry follows this list). This will be repeated as you move through the journey.
    4. The tool picked for step 2 should be used here as well – model the desired, to-be processes – how should they operate post remediation?
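
    By way of illustration only (the field names are invented, and this is far from a complete Article 30 record), the record of processing activity captured in step 3 can start life as something as simple as a structured entry per process:

        # Minimal, illustrative record-of-processing entry captured during process modelling.
        processing_record = {
            "process": "customer onboarding",
            "purpose": "identity verification and account creation",
            "data_categories": ["name", "address", "date of birth", "passport number"],
            "data_subjects": ["customers"],
            "systems": ["CRM", "KYC screening"],
            "lawful_basis": "contract",
            "retention": "6 years after account closure",
            "recipients": ["screening vendor"],
        }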

    Once you have the process side covered, start thinking about how the needed process changes affect the underlying IT systems and your overall Enterprise Architecture. Without doubt the long list of projects and the trickle-down impact will need business value based prioritization and funding.

    And in conclusion… in many ways GDPR and good data protection are no different from any other major transformational project – just with higher stakes. In the next post I will dive deeper into some of the more, shall we say, nuanced areas for your consideration.

     