The education system is so much more than merely an education system; it serves many different purposes. For example, in the early years, the education system acts in place of the parents when the parents are not present (which is especially important in broken households). The education system also serves as an allocator of human capital while fostering a foundation for critical thinking and providing students with the tools necessary for the future. However, due to structural factors in the education system and the pace at which the world changes, there can be a disconnect between the human capital and tools needed versus those provided, leading to misallocation. This misallocation can cause 1) Students to obtain degrees they do not utilize, 2) Overemphasis on obtaining more education, 3) Overemphasis on attending more prestigious schools, 4) Meaningful student debt due to frictions and 5) A disconnect between the human resources/tools that are needed versus provided.
I read an article by Paul Krugman describing the impacts of hyper-globalization and arguing that those impacts are largely behind us, meaning continued globalization is unlikely to have the same impact it did before in terms of seeing entire industries vanish in the United States. From an S-curve perspective, this suggests globalization is in a more mature phase of development given the degree of saturation.
Initially, globalization caused many commodity-like industries to move to lower-cost regions of the world that had a comparative advantage in labor costs, some of which was unethical from a human rights perspective. That being said, this allowed the United States consumer to maintain purchasing power because those imported goods were now cheaper than what could be made domestically. This needs to be weighed against the fact that wages, adjusted for inflation, have not increased meaningfully for consumers since the 1970s. It should be noted that the wage stagnation is also due to jobs transitioning overseas, with workers unable to find equal-to-or-higher-paying jobs, as well as a decline in workers' bargaining power as corporations are viewed as entities and employees have merely become invariable costs. Moreover, the hyper-globalization phase coincided with China's rise to prominence.
Please see my reports on Inflation and China (ask for Password on Protected Posts):
🔺New Model: The Dynamics of and a 11 Factor Model for Inflation https://internationalcapitalmarkets.org/2019/09/09/the-dynamics-of-and-a-model-for-inflation/ via @Diamond1_CEO
🔺The 11 Factors to China’s Rise to Prominence on the World Stage https://internationalcapitalmarkets.org/2019/08/21/understanding-chinas-rise-to-prominence-on-the-world-stage/ via @Diamond1_CEO
Despite the purchasing-power benefits of globalization, the hyper-globalization phase eviscerated jobs at such a pace that workers could not be retrained or transitioned into new jobs as fast as jobs were transferring overseas to lower-cost regions, creating forced displacement. During this period, a strong social safety net would have been necessary, along with a program to transition workers into new markets/industries where their skills were transferable at equal-to-or-higher wages.
Please see my reports on Businesses as Innovations, the Use of Subsidies to Spawn New Industries and Markets, and Forced Displacement:
🔺Businesses are Innovations in and of Themselves and Job Creators https://internationalcapitalmarkets.org/2019/09/12/businesses-are-innovations-in-and-of-themselves-and-job-creators/ via @Diamond1_CEO
🔺Part 1: Subsidies should Spawn Strategic or New Industries, Not Support a Below-or-Equal to Cost/Comparative Advantage Market Price for Commodities https://internationalcapitalmarkets.org/2019/10/11/%f0%9f%94%bapart-1-subsidies-should-spawn-strategic-or-new-industries-not-support-a-below-or-equal-to-comparative-advantage-market-price-for-commodities/ via @Diamond1_CEO
🔺New Model: Impacts of Forced Displacement and Built-In Solution https://internationalcapitalmarkets.org/2019/09/03/impacts-of-forced-dispacement/ via @Diamond1_CEO
Although the pace of globalization has slowed, to me, the pace at which automation occurs could produce dynamics of forced displacement similar to hyper-globalization if a hyper-automation phase were to occur. Therefore, I think it is a useful analog for drawing comparisons.
The degree and pace of automation needs to be buffered against the social safety net and the pace at which Strategic and New Industries are created whether by the emergent properties of markets or subsidies.
In general, cryptocurrencies and blockchain have certain features such as transparency, encryption, safe contracts, built-in trust protocol, etc. that are quite favorable which can increase efficiency, reduce layered costs, etc.
That being said, there are some things that should be understood about cryptocurrencies:
1. Mining cryptos is extremely energy intensive, which limits the expansion of cryptos and creates an advantage for incumbents.
2. The current energy grid lacks the efficiency to handle such a load, which is why renewables and energy efficiency are really important: they tie into the success of cryptocurrencies.
3. The current craze for "cryptos" is similar to the ".com" craze.
4. Some of the fund flows into cryptocurrencies are due to "risk-off" sentiment, with cryptos also serving as safe havens from governments.
5. The government does not particularly like cryptocurrencies. To be fair, this is partially due to some of the ICO scams, limited regulatory oversight, etc., but also due to the fact that cryptocurrencies are a threat because they are not controlled by the government.
6. The value of cryptos is tied to conversion/expansion/transaction of crypto payments, investment value, investment value perception, safe-haven demand, distrust in fiat currencies, and speculation, all of which I would like to expand upon further.
For simplicity, cryptocurrency conversion and adoption is faster in the technology sector, certain aspects of the financial sector, mobile payment systems, apps, digital platforms, and societies benefiting from a leapfrog effect. Conversely, cryptocurrency conversion and adoption is slower in areas with entrenched POS systems and/or still dominated by cash payments. Ultimately, as conversion increases, so does the underlying value. However, layered on top of the conversion value are investment value, investment value perception, safe-haven demand, distrust in fiat currencies, and speculation.
At a minimum, the base value of cryptocurrencies is the conversion value. As this increases, it leads to investment inflows, which further increase the value via positive reinforcing factors. Also, if investment value increases, it improves perception, which in turn leads to increased conversion and adoption. These dynamics are interchangeable and provide a floor value. Next, safe-haven demand and lack of trust in fiat currencies can be layered on top of the conversion/adoption and investment value/perception components. Collectively, these are fairly stable, less volatile sources of value.
However, when the bulk of cryptocurrency value is merely speculative and to make profits from trading, the value is very volatile, unsustainable, and will not succeed. Pay attention to cryptocurrencies that increase in value but conversion and adoption are stagnant or declining as that is a clear sign it is highly speculative and not supported by any fundamentals.
In its simplest form, there are only three fundamental ways to extract money from the property markets: 1. Selling, 2. Renting, or 3. Borrowing against the Equity. Those mechanisms are largely driven by seven key fundamentals: 1. Demographics, 2. Household Formation Rates, 3. Rising Incomes, 4. Zoning Policies, 5. Immigration, 6. Foreign Capital and 7. Underwriting Standards. Those seven fundamentals largely drive supply and demand for property. But the 2007-2008 housing crisis led many to believe that financial engineering and securitization obviated those principles, which was untrue. While not the primary cause per se, securitization was one of the leading factors in the 2007-2008 financial crisis, as discussed below.
Leading up to the housing crisis of 2007-2008, housing performed relatively well as an investment driven by demographics, rising household formation rates, rising incomes, and, in turn, rising housing prices. As a result of such prosperity, many people were told that housing was one of the best investments that a person could make. Despite such assurances coupled with rising housing prices, the fact still remained that the only way a homeowner could extract that value from their property is either by 1. Selling, 2. Renting, or 3. Borrowing against the Equity.
The securitization of mortgages was supposed to offer an investment opportunity while diversifying and reducing risk. Yet, no matter how the tranches were structured, a group of investors purchasing securitized mortgages does not reduce risk by any means; the risk still remains in the aggregate, commensurate with the underlying, which is the homeowners' ability to pay their mortgages. What securitization did do was relieve banks and underwriters of accountability for their risks, and this was the primary factor that led to the housing crisis of 2007-2008.
If lenders and underwriters make very poor loans, that will impact their portfolios, underwriting standards, solvency ratios, loan-to-value ratios, etcetera. The securitization process meant lenders and underwriters did not have to hold mortgages on their books, as the mortgages could simply be reconstituted and sold to investors, which opened Pandora's box. As underwriting standards deteriorated, loan origination increased, and more and more people owned homes without sufficient income to pay their mortgages, let alone adjustable-rate mortgages. This also led to increased speculation and the proliferation of "house flipping," which caused housing prices to become untethered from the fundamentals. Moreover, the increases in housing prices and, in turn, household equity created an illusory cushion that masked the rise in loan delinquencies.
Once the positive reinforcing factors that were created from securitization and lack of accountability dissipated, the process began to reverse itself creating systemic and converging losses impacting both the real and financial economy.
The underlying structure of the stock market is asymmetrical in nature and structured in such a way that there is an upward bias: it is easier to buy a stock than it is to short-sell one. This is on purpose and largely due to two factors: 1) The fundamental role the stock market is supposed to play in capital formation via IPOs (although most start-ups exit via the private market; private equity uses debt, a bit of a misnomer) and 2) Maintaining support for the dollar vis-a-vis financialization. The higher, more liquid, and less volatile the market, the more companies tend to IPO. Yet, when the sole purpose becomes the management of expectations via repetition to maintain confidence at the expense of fundamentals, it becomes nothing more than a confidence game. If the stock market were structured symmetrically, short-selling would be just as easy as buying. Not only is the structure of the market asymmetrical, so is the flow of information.
Information Flows/Sell-Side Catch-22:
The information inherently flows 1) From the companies 2) To the regulators, sell-side/investment banking analysts, industry publications, and/or media outlets 3) To the investment community (buy-side), which comprises large institutions more so than individual investors.
Why is this important? It is the large institutional money that moves markets, not individual investors. More importantly, the asymmetries of information place the advantage with the companies. As a result, companies seldom stray from their investor relations slide decks and the forward financials received by the sell-side which are used in their models do not stray far from company guidance in most instances.
Why is this? Because the sell-side is in a precarious position as an intermediary between the companies and investors. If estimates are too high or too low, sell-side analysts find they are unable to have their questions answered during conference call Q&As. Many buy-side analysts bemoan that sell-side analysts are biased, offering more Buy-or-Neutral recommendations at the expense of Sell recommendations. Well, you have to take into consideration the asymmetrical structure of the market and the fact that issuing a Sell recommendation typically confers limited access to the management meetings between the company and the buy-side for which the sell-side acts as liaison. Yet, even in its bemoaning, the buy-side highly values such meetings with management. Therefore, the sell-side finds itself in a Catch-22 between the companies and the buy-side. As a result, the asymmetrical structure of the market and of information flows limits the usefulness of the current valuation models. The most common valuation models employed to derive estimated values are discounted cash flow (DCF) and multiples, typically used in conjunction.
The Limits of Current Models:
Discounted Cash Flow:
Discounted cash flow models provide greater detail on the inner workings of a company compared with multiples on a standalone basis, which is why the value using multiples is typically derived from 1-year DCF estimates (derived from the sell-side, which are in turn derived from company guidance as mentioned previously) against historical company and sector multiple averages. However, I think modeling 10-year time horizons for DCFs is of limited use because error rates grow exponentially the further into the future you go. For example, the error rates of future estimates should be cut off at 10%-20%, yielding success rates of 80%-90%. Additionally, if you could see 10 years into the future, you wouldn't be at that desk estimating what earnings will look like 10 years from now to begin with. It makes more sense to utilize a 2-3 year time horizon, or a 3-5 year horizon at most, with frequent updates (contingent upon the cone of uncertainty and maintaining the 80%-90% success rate mentioned previously). Moreover, the differential between an estimated stock price using a 10-year DCF and a 5-year DCF is not that meaningful, as the bulk of the value is in the terminal value itself. Additionally, DCFs do not provide much value for early-stage companies that are cash flow negative. But there is a big difference between a business that is cash flow negative in an early growth stage and a business model that becomes more cash flow negative the more it grows, ahem, WeWork… which brings me to multiples.
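As a minimal sketch of the terminal value point, the snippet below (with hypothetical inputs: a $100 starting free cash flow, 5% growth, a 9% discount rate, and 2% terminal growth) shows that the terminal value dominates a standard Gordon-growth DCF, so stretching the explicit forecast from 5 to 10 years changes the total far less than the added forecast error would suggest:

```python
def dcf_value(fcf, growth, discount, years, terminal_growth):
    """Return (PV of explicit cash flows, PV of Gordon-growth terminal value)."""
    pv_cash_flows = 0.0
    cf = fcf
    for t in range(1, years + 1):
        cf *= (1 + growth)                      # grow the cash flow each year
        pv_cash_flows += cf / (1 + discount) ** t
    # Terminal value capitalizes the year after the explicit horizon
    terminal = cf * (1 + terminal_growth) / (discount - terminal_growth)
    pv_terminal = terminal / (1 + discount) ** years
    return pv_cash_flows, pv_terminal

# Hypothetical inputs: $100 FCF, 5% growth, 9% discount rate, 2% terminal growth
for horizon in (5, 10):
    pv_cf, pv_tv = dcf_value(100, 0.05, 0.09, horizon, 0.02)
    total = pv_cf + pv_tv
    print(f"{horizon}-yr DCF: total {total:,.0f}, terminal value share {pv_tv / total:.0%}")
```

Under these assumptions, the terminal value carries well over half of the 5-year total, and the 5-year and 10-year totals land within roughly 10% of each other, which is the sense in which the extra five years of estimates buy little.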
Historical Company and Industry Average Multiples:
To reiterate, discounted cash flow models provide greater detail on the inner workings of a company compared with multiples on a standalone basis, which is why the value using multiples is typically derived from 1-year DCF estimates (derived from the sell-side, which are in turn derived from company guidance) against historical company and sector multiple averages. All multiples do is convey a relationship of what things go for in the market relative to a fundamental.
Investors simply form a model of what things go for in the market based on multiples, similar to going to a flea market, finding a cheap item you know sells for a higher price elsewhere, and simply arbitraging the price differential between the two. This stylized way of thinking is limited. The dynamics are the same when using a historical or industry average multiples approach.
Historical Company Average Multiples
When looking at a historical company average multiples approach, you simply overlay the business cycle and factors for improved company fundamentals onto the historical multiple while taking into consideration the Molodovsky Effect. By using an average, you are implicitly assuming a 50/50 scenario, whereby with new information you merely shift the weighting to reflect the new estimated price target. It is important to delineate whether performance is consistently above and/or below average; otherwise the valuation will be persistently inflated and/or deflated.
Historical Industry Average Multiples
When looking at a historical industry average multiples approach, you simply overlay the business cycle and factors for improved company fundamentals in relation to historical industry multiples while taking into consideration the Molodovsky Effect. By using an average, you are implicitly assuming a 50/50 scenario, whereby with new information you merely shift the weighting to reflect the new estimated price target. It is important to delineate whether performance is consistently above and/or below the industry average; otherwise the valuation will be persistently inflated and/or deflated.
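The implicit 50/50 weighting can be sketched as follows, with hypothetical numbers (a $4 forward EPS and a 12x-18x historical multiple range): using the plain average multiple is the 50/50 case, and new information simply shifts the weight toward one end of the range:

```python
def multiple_based_target(eps_estimate, low_multiple, high_multiple, weight_high=0.5):
    """Price target from a weighted blend of a low and a high historical multiple.

    The 'average multiple' approach is the 50/50 default; new information
    shifts weight_high up or down rather than changing the range itself.
    """
    blended = weight_high * high_multiple + (1 - weight_high) * low_multiple
    return blended * eps_estimate

# Hypothetical: $4 forward EPS, historical multiple range of 12x-18x
print(multiple_based_target(4, 12, 18))                   # 50/50 blend -> 15x
print(multiple_based_target(4, 12, 18, weight_high=0.7))  # tilted toward 18x
```

If performance has been persistently above (or below) average, the honest fix is a persistently higher (or lower) `weight_high`, not a one-off adjustment, which is the point about persistently inflated or deflated valuations.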
Lastly, it seldom makes sense to compare a company's multiple with that of the market as a whole, as a result of the idiosyncratic risk of an individual company coupled with the manner in which multiples are weighted for the market as a whole.
Key Points Regarding the P/E Ratio:
Considering the manner in which the "E" in the P/E ratio can be manipulated upwards, it makes sense to add back those factors that make earnings seem better than they are. It is also important to understand how the multiple is derived. For example, is a high P/E multiple the result of "P" increasing or "E" decreasing? Conversely, is a low P/E multiple the result of "P" decreasing or "E" increasing? Distinguishing between those factors is important in conjunction with earnings surprises and the modeling of positive reinforcing factors.
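A minimal sketch of that decomposition, with hypothetical numbers: the helper below splits a P/E change into the part attributable to the price move and the part attributable to the earnings move, so a rising multiple with a flat price is exposed as an "E decreasing" story:

```python
def pe_change_drivers(p0, e0, p1, e1):
    """Attribute a P/E multiple change to the price move vs. the earnings move."""
    pe0, pe1 = p0 / e0, p1 / e1
    price_effect = (p1 / e0) - pe0       # multiple change if earnings had stayed flat
    earnings_effect = pe1 - (p1 / e0)    # remaining change, from the earnings move
    return pe0, pe1, price_effect, earnings_effect

# Hypothetical: price flat at $50 while EPS falls from $5 to $4
pe0, pe1, from_price, from_earnings = pe_change_drivers(50, 5, 50, 4)
print(f"P/E went {pe0:.1f}x -> {pe1:.1f}x; "
      f"price contributed {from_price:+.1f}, earnings contributed {from_earnings:+.1f}")
```

Here the multiple rises from 10x to 12.5x with zero contribution from the price, which is exactly the "high P/E from E decreasing" case the text warns about.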
Key Points Regarding Clicks and ESG
It is clear that companies are not solely valued in the markets on their ability to generate earnings, as evidenced by 1. Tech companies that are valued on clicks and 2. Environmental, Social, and Corporate Governance (ESG). In both instances, the only reason stocks can maintain value based on clicks or ESG is conditioning. Despite that similarity, the two approaches yield vastly different outcomes. Eventually, a company valued solely on clicks will fail. However, ESG offers a set of criteria that allows stocks to maintain and/or increase in value based on a set of parameters without undue influence and not at the expense of earnings.
Please refer to the following report on investor expectations and arbitrage: Investors Make Investments with the Expectation to Make Money – Otherwise they Wouldn’t Make the Investment in the First Place https://internationalcapitalmarkets.org/2019/09/11/all-investors-make-investments-with-the-expectation-to-make-money-otherwise-they-wouldnt-make-the-investment-in-the-first-place/
Anybody who has ever had to make a decision (basically everyone on the planet) will eventually encounter the insidious Catch-22. What is this Catch-22 you speak of? Glad you asked. Well, let’s refer to our trusted friend Webster:
Catch-22 (noun) – A dilemma or difficult circumstance from which there is no escape because of mutually conflicting or dependent conditions.
Said less Websterish, a Catch-22 is a paradoxical situation whereby the decision maker is damned if they do and damned if they don't. In chess parlance, this is typically when the victor declares checkmate. Or, should you fancy the UFC more, this is when a tap out occurs. In other words, regardless of the action taken, it seems that 1. A loss or 2. The least favorable outcome relative to expectations is going to occur either way.
Now, depending upon 1. How early it is detected, 2. The magnitude of the all-in cost (explicit and intangible/opportunity), 3. The decision maker's flexibility, 4. Severity and 5. The mental fortitude to take losses and still stay focused, a Catch-22 does not have to result in a checkmate situation, although losses may still occur. The more important factor is limiting cascading losses and being positioned in a way that offers greater optionality and the ability to anticipate.
Then again, you might propose that early detection of a Catch-22 is a Catch-22 in itself, right? For if you knew of the Catch-22 in advance, you could avoid being in such a circumstance to begin with. Therefore, you do not realize you're in a Catch-22 until it arrives, at which point it's too late. Touché, my good friend. Touché.
Well, you can either wait for the Catch-22 to arrive and, for lack of a better phrase, shit your pants, or you can 1. Discover your intangible/opportunity costs now (and get in contact with International Capital Markets), 2. Identify where the greatest pressure points exist, 3. Determine the second and third order derivatives of those pressure points and then 4. Model a range of scenarios based on past precedents and other appropriate models while mapping out how intangible/opportunity costs are likely to evolve and whether there is any co-dependency. As Mark Twain reportedly said, "History doesn't repeat itself, but it often rhymes."
At this juncture, there might be 5-20 scenarios that have to be evaluated when taking into consideration the first, second, and third order derivative pressure points, as well as intermediate weak-links. Each evaluation requires looking at both the explicit/intangible/opportunity costs and how these look under both maximum positive/negative rates of growth to identify where any exponential growth exists within the pressure points, etc.
Ultimately, you want to choose the outcome that offers the net lowest combined explicit/intangible/opportunity cost. The worst mistake you can make is ignoring the opportunity costs and simply choosing the outcome that offers the highest net benefit as that is often a pyrrhic victory.
For example, let’s say you have 2 proposals. Proposal 1 has a benefit of 10, cost of 5, and intangible/opportunity cost of 2. Proposal 2 has a benefit of 20, cost of 10, and intangible/opportunity cost of 8. Most people will choose Proposal 2, but Proposal 1 is better because the true net benefit is 3, while Proposal 2 has a true net benefit of 2. Lastly, if Catch-22s are not managed properly, the situation can cascade whereby Catch-22s begin to coalesce.
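The two-proposal example above can be sketched directly; the point is that ranking on headline benefit alone picks the wrong proposal once intangible/opportunity costs are netted in:

```python
def true_net_benefit(benefit, explicit_cost, intangible_opportunity_cost):
    """Net benefit after subtracting both explicit and intangible/opportunity costs."""
    return benefit - explicit_cost - intangible_opportunity_cost

# Figures from the example above
proposals = {
    "Proposal 1": true_net_benefit(10, 5, 2),   # true net benefit of 3
    "Proposal 2": true_net_benefit(20, 10, 8),  # true net benefit of 2
}
best = max(proposals, key=proposals.get)
print(best, proposals)  # Proposal 1 wins despite the smaller headline benefit
```

Ignoring the third argument is precisely the pyrrhic-victory mistake: on benefit minus explicit cost alone, Proposal 2 scores 10 to Proposal 1's 5.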
Please refer to the following report on cutting losses: A 50% Gain Doesn’t Equate to Breaking-Even on a 50% Loss https://internationalcapitalmarkets.org/2019/09/14/a-50-gain-doesnt-equate-to-breaking-even-on-a-50-loss/
Lastly, this is absolutely NOT about creating What If scenarios into oblivion. “Remember, the hero is never the person that puts out the forest fire before it ever started.”
One of the most important and often forgotten concepts is the percentage relationship between gains and losses. Why is it so important? Because the relationship is non-linear: the gains required to break even grow exponentially as losses increase in percentage terms.
For example, if you bought a stock priced at $10 and it goes down to $5, that is a 50% loss. It might not seem like much, as the decline in nominal terms is only $5, so it should not be all that difficult for an incremental $5 increase to get back to $10 and break even, right? No. The required increase from $5 to $10 is actually 100% in percentage terms, and for the stock to reach $15 (a $5 gain on the initial $10 cost basis) from the current price of $5, the required gain is 200%. As you can see, the gains required to break even or generate a return are not proportional to the loss in percentage terms, even though it seems simple in nominal terms. How is this knowledge helpful and applicable?
The report linked above demonstrates the percentage gain required to break even on percentage losses (5%-100%). For example, at a 5% loss, the required gain to break even is 5.26%. At a 10% loss, it is 11%. At a 20% loss, it is 25%. At a 30% loss, it is 43%. The required incremental gain, over and above the loss itself, grows from 0.26% at a 5% loss to 13% at a 30% loss.
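The figures above all come from one closed-form relationship: for a fractional loss L, the gain required to break even is g = L / (1 - L). A minimal sketch:

```python
def required_gain_to_break_even(loss):
    """Fractional gain needed to recover a fractional loss: g = L / (1 - L)."""
    return loss / (1 - loss)

for loss in (0.05, 0.10, 0.20, 0.30, 0.50):
    gain = required_gain_to_break_even(loss)
    print(f"{loss:.0%} loss -> {gain:.2%} gain required to break even")
```

Because the denominator (1 - L) shrinks as the loss grows, the required gain accelerates: a 5% loss needs 5.26%, a 20% loss needs 25%, and a 50% loss needs a full 100%, which matches the $10-to-$5 stock example above.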
Based on this data, nobody should accept losses beyond 20%, as the required percentage gains to break even really begin to grow exponentially. Additionally, a person could easily look at their performance and set additional boundaries based on their maximum and average percentage gains. If the maximum percentage gain on any project has never been above 20%, then there is no reasonable expectation of breaking even on losses of 20% or greater. If the average gain on projects has been 10%, then it's also reasonable to set a boundary of automatically cutting losses at 10%-15% on projects and moving on.
That is why it is better to cut losses sooner rather than later as the opportunity costs compound exponentially. Additionally, by setting such boundaries, it limits the ego, cognitive errors and human emotion.
It should be noted that the above example is universal and does not merely apply to stocks.
Please refer to the report on Intangible Costs: A Businesses’ Greatest Costs are Intangible in Nature https://internationalcapitalmarkets.org/2019/09/07/a-businesses-greatest-costs-are-intangible/
The basic underpinnings of the economic model are such that businesses start off small as a simple idea and, if successful, typically grow to become larger, incorporated businesses that are either private or public in nature, with the residual benefit accruing to the owners (not necessarily the founders). Moreover, at various stages of development, when seed or growth capital is needed in excess of a company's revenue, the option exists to attract capital via the private or public markets (most deals are done in the private markets). Yet, it is the aforementioned structure in and of itself that can stifle job creation and innovation.
As previously stated, the underlying structure is not always beneficial to founders of a business because 1) Founders exchange their ownership for money from investors, who manage risk; 2) The money from investors is utilized to pay vendors who provide services, with little relative upside to the founders because ideologies are not aligned; and/or 3) Investors (who typically have unrealistic, too many, or competing expectations) often impose their mandates onto the founders, resulting in lofty forecasts to achieve ROI, which in turn leads to unsustainability. Of note, many VCs do not earn their cost of capital, relying on unicorns to keep them afloat, while some companies will purchase a company with the sole purpose of mitigating competition. There are also government secrecy orders on patents. Collectively, these factors can hamper innovation.
This is particularly important because businesses are 1) Not only the backbone of any economy but 2) Technically innovations in and of themselves via the emergent properties of markets. While undue focus is placed on large corporations as job creators, this is inherently untrue, as labor is viewed as an invariable cost in a competitive, global, knowledge- and service-based economy.
More importantly, when there is not any incentive for founders to create businesses due to vendors and/or investors receiving a greater proportion of the economic pie, this bottleneck negatively impacts innovation and business/job creation. The net/net between small business bankruptcies and new small business licenses/creation is, to a degree, the barometer of the limitations that exist within the current structure.
Nature is an extraordinary teacher. However, all too often, humans view themselves as separate and distinct from nature rather than realizing we are a part of it, which causes us to miss its lessons. There are many models from nature and biology that can be applied to finance, although I am focusing here simply on self-regulating properties and ecosystems.
Nature has many self-regulating properties that restore balance via negative reinforcing factors, similar to purely free-market systems (should they exist). For example, biological diversity within an ecosystem is a sign of health and vitality. Moreover, biological diversity within an ecosystem in many instances leads to a self-regulating system that oscillates toward balance, equilibrium, and stability. Of note, such an equilibrium does not mean the system is Gaussian, though. Similarly, diversity in business via different cultures, experiences, and ways of thinking creates a degree of stability and equilibrium that oscillates toward the most optimal outcomes.
Another unique aspect of biological diversity within an ecosystem that leads to self-regulation is the importance of Apex predators. As a caveat, within the context of an ecosystem and biological diversity, balance does not necessarily mean equal in number though. For example, typically, Apex predators are outnumbered by their prey. Even so, Apex predators ensure that there is not an overpopulation of their prey and this iteration continues down the hierarchy leading to a sustainable ecosystem. In fact, an indicator of the health and vitality of an ecosystem can be measured by the health of its Apex predators.
This might suggest that the health and vitality of the top 1% in society ensures the sustainability of the 99%; however, that is not the proper conclusion to draw from this example because the dynamics are not the same. Rather, what is important is the co-dependency between Apex predators and lower iterations. If Apex predators die off, the lower iterations will exhaust their food supply, leading to collapse. Likewise, if the lower iterations die off, so will the Apex predators. The same dynamic exists between the top 1% and bottom 99% of society. The benefit of viewing financial and economic relationships as an ecosystem is the holistic and dynamic framework it provides, as opposed to linearity. Speaking of Apex…
Humans are the Apex predators on the planet (besides bacteria). And yet, while nature has its own self-regulating mechanisms, many human systems are driven by unsustainable, positive self-reinforcing factors that ultimately implode of their own accord. These dynamics are similar to placing a candle in a self-contained glass jar: eventually, the candle burns out once the oxygen is consumed. From an ecosystem perspective, positive self-reinforcing human systems are in many ways at odds with the self-regulating, negative reinforcing factors in nature. The degree of environmental degradation by humans requires a complete shift in the social narrative, along with incentive structures focused on sustainability and restoration.
The primary constant in life is change which is why the following quote from Civilizations Past and Present is important: “In the struggle for survival, the fittest win out at the expense of their rivals because they succeed in adapting themselves best to their environment.”
When I think about the title “A Businesses’ Greatest Costs are Intangible in Nature,” I also think about Albert Einstein’s saying, “Not Everything that Counts can be Counted and Not Everything that can be Counted Counts.”
Those things that cannot be counted are the intangibles. Hence, it is difficult to quantify intangible costs as many intangibles are qualitative in nature. Yet, intangible costs can actually be quantified via opportunity costs. Here is an example I sent to a consultant who specifically works with intangible costs:
Everyone seems to focus on explicit costs which are easily identifiable. However, the most important costs are intangible costs which everyone avoids. Moreover, intangible costs drive opportunity costs/path dependency which impact all aspects of an organization.
For example, consider the intangible costs and importance of creating and maintaining relationships and connections, or enhancing your reputation. These can translate into increases or decreases in both revenue and costs.
Let’s say a company attends conferences 5 times a year at $100 apiece, and comparable industry or historical company data indicate, where possible, that each connection established or reputational gain can increase revenue by $100. Attending 5 conferences next year is an outlay of $500, and suppose establishing 10 connections is the target, worth $1,000 in revenue. The intangible cost of 1. not attending is the $1,000 in forgone revenue (the outlay is never incurred), while 2. attending without establishing contacts is $1,500 all-in (the $500 outlay plus the $1,000 in forgone revenue).
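The arithmetic above can be laid out explicitly (using the example's illustrative figures, not real data):

```python
# Illustrative figures from the example above (not real data).
conference_cost = 100            # cost per conference
conferences_per_year = 5
revenue_per_connection = 100     # estimated revenue lift per connection established
target_connections = 10

outlay = conference_cost * conferences_per_year                 # explicit cost: $500
expected_revenue = revenue_per_connection * target_connections  # $1,000

# 1. Not attending: the outlay is never spent, so the opportunity cost is
#    the forgone revenue alone.
cost_not_attending = expected_revenue                           # $1,000

# 2. Attending but establishing no contacts: the outlay is spent AND the
#    revenue is forgone (all-in cost).
cost_attending_no_contacts = outlay + expected_revenue          # $1,500

print(cost_not_attending, cost_attending_no_contacts)
```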
If you could account for every single cost and performance metric in your business would that make your business very successful? Not necessarily. Almost all businesses know their costs and that doesn’t make them successful. Additionally, when businesses massively reduce their costs to a bare minimum you find that they experience a short-term increase in profits, but they actually massively increase their intangible costs.
For example, think of massively reducing costs as a round board attached with 100 pegs holding up another round board. Similar to Jenga, you remove as many pegs as possible while making sure the pegs still support the boards without collapsing.
Eventually what happens is companies reduce costs to such a degree that there is no margin for error. This is very similar to increasing the workload on employees, where the intangible costs are 1) A loss of morale; 2) An increase in stress; 3) Exhaustion; 4) A decrease in motivation; 5) An increase in errors; 6) A decrease in personal time; 7) A decrease in productivity; and 8) An increase in turnover. As you can see, qualitative factors lead to an increase in intangible costs, resulting in an increase in opportunity costs vis-à-vis increased turnover, which can be quantified. In other words, the quantitative factors merely explain that which is, acting more as the symptom rather than the cause.
The above example makes me think of the Solow residual, where output is a function of labor, capital, and total factor productivity (TFP). If labor and capital are pushed to their maximum, any residual output growth is presumed to be technological: the only way to shift the productivity curve is through technological advances. Yet the technological aspect is inherently intangible in nature and driven by human creativity, imagination, and ingenuity, which are fostered by the proper culture and incentive structure. “The wheel was merely the manifestation of an idea, in other words, technological thought.”
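As a sketch, using the standard Cobb-Douglas production function with an assumed capital share, the residual can be backed out as follows (all figures are illustrative):

```python
# Cobb-Douglas production: Y = A * K**alpha * L**(1 - alpha).
# The Solow residual backs out A (total factor productivity) as the portion
# of output that measured capital and labor cannot explain.
alpha = 0.3   # capital's share of income, a common textbook assumption
Y = 100.0     # observed output (illustrative)
K = 200.0     # capital input (illustrative)
L = 80.0      # labor input (illustrative)

A = Y / (K**alpha * L**(1 - alpha))  # the residual: "technology"
print(round(A, 3))
```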
Please refer to the report on Catch-22 Situations: Why Of Course! However, there is a Catch……..22 https://internationalcapitalmarkets.org/2019/09/16/why-of-course-however-there-is-a-catch-22/
There is a unique interplay between the distribution of age cohorts, the investment needs of those age cohorts, the search for Yield, and corresponding institutional confines that limit the investment opportunity set.
In general, a country wants to be in a situation where it has more people working vs. retiring and more births than deaths (no I am not advocating anti-abortion laws). When you have a combination of more people retiring vs. working and more deaths than births it is going to create meaningful strain on the economy, which doesn’t even include the impacts from widening income inequalities which only exacerbates economic strains on the system.
In the United States, the average life expectancy is roughly 80 years old. Slightly less than half (roughly 47%) of the population will be retired, retiring, or dependent over the next 20 years (retiring/retired alone is roughly 23%).
There are many people that work multiple part-time jobs without any contingent benefits. The working age cohort also has meaningful student debt, relatively high housing costs, and with many more independently geared women in the workforce, household formation rates have trended lower. Against this backdrop, the working population faces reasonable strain and the ability to invest is somewhat limited skewing asset allocation towards those retiring/retired.
I am defining retiring/retired as ages 60-90. Remember, average life expectancy is roughly 80 years old, so a 60-year-old has a 20-year investment horizon. That being said, investments for this cohort will be more conservative in nature and structured more heavily around fixed-income securities. Very low interest rates have allowed the stock market to remain at high levels. However, the combination of a greater allocation towards fixed income coupled with low interest rates to support equity markets has caused a search for Yield. This dynamic naturally forces investors further out on the risk spectrum.
It is clear that the demands of the retired/retiring age cohort coupled with market machinations creates bounded constraints. In the search for Yield environment, additional constraints are investor policy statements as well as institutional structures that limit the ability to take on greater risk. As a result, this creates crowded trades, less diversification, and less liquidity given the limitations of asset allocation models. Collectively, this creates far greater downside risk than what is currently anticipated.
A country’s development is merely a spectrum. That developmental spectrum can follow the developmental stages of other countries, or the developing country can leapfrog based on 1. Its understanding of how other countries developed; 2. Its own unique dynamics; and/or 3. Technological advances. As countries become more developed, there is less of a need to move to relatively more developed countries to improve standards of living, which naturally slows immigration flows and is the built-in mechanism of shared prosperity.
Immigration comes in two forms usually: 1. Moving from relatively less developed to more developed and 2. Forced displacement (similar to Archimedes but not a Eureka moment) via natural disasters/war (in all of its forms). The forced displacement exacerbates natural immigration flows. Absent point 2 and in conjunction with point 1, when other countries become wealthier (people), the natural byproduct is less of a need to immigrate to wealthier countries and less of a need for borders, if at all.
GDP per capita is a poor measure of wealth as it does not take into account the net equity position or the distribution of the net equity position. Moreover, income inequality exacerbates stresses in the economic system. The combination of forced displacement coupled with the stresses from income inequality exacerbate prejudices against immigrants/migrants which is unacceptable. Even in the United States, most people are poor on a net-equity basis. Trickle-down economics seldom works due to misappropriation of funds, whereas trickle-up economics does work, although trickle-up is a misnomer.
At the outset, it is important to understand the fallacy of composition regarding the depreciation of global currencies to gain an illusory export advantage. If a country weakens its currency, it will gain an export advantage, all else equal. However, nothing operates in a vacuum. When other countries depreciate their currencies to gain an export advantage, the advantage becomes obsolete and you are back at square one. The next iteration of global currency depreciation becomes a transient advantage until becoming obsolete and ending back at square one again – the dynamic occurs in each iteration.
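The fallacy of composition can be illustrated numerically: one country's depreciation moves the cross rate in its favor, but uniform depreciation leaves every cross rate exactly where it started (the currencies and rates below are made up for illustration):

```python
# Units of each currency per 1 unit of a common numeraire (made-up figures).
rates = {"A": 1.00, "B": 2.00, "C": 0.50}

def cross_rate(rates, x, y):
    """Units of currency y per unit of currency x."""
    return rates[y] / rates[x]

before = cross_rate(rates, "A", "B")

# Country A alone depreciates 10%: A's exports get cheaper in B's currency.
rates_a_only = dict(rates, A=rates["A"] * 1.10)
after_a_only = cross_rate(rates_a_only, "A", "B")

# Every country depreciates 10%: all cross rates are unchanged, so the
# export advantage is obsolete and everyone is back at square one.
rates_all = {k: v * 1.10 for k, v in rates.items()}
after_all = cross_rate(rates_all, "A", "B")

print(before, after_a_only, after_all)  # after_all == before
```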
That being said, labelling China a currency manipulator today is inherently incorrect, as that is 30 years too late. The psychological threshold of 7 CNY per USD creates its own perception of importance due to repetition/conditioning, causing China to make sure its currency appreciates when approaching that threshold. China would likely let its currency float once the balance of payments is stable.
The point is, the combination of 1) The 7 CNY per USD threshold, 2) Tariffs, 3) Jawboning by investors positioned for CNY depreciation, and 4) Increased fear of China collapsing due to the 7 CNY per USD threshold and tariffs causes depreciation in the Yuan in and of itself via capital flight. However, to label China a currency manipulator while the aforementioned actions of the United States cause the depreciation is a tautology and a double standard.
Lastly, increasing tariffs on countries from which the United States imports increases costs to consumers, businesses, or both. This also causes those countries facing tariffs not to borrow in USD, because the increase in inflation in the United States will cause interest rates to rise, leading to a stronger dollar, a decline in exports, and more expensive USD debt for foreign countries. Conversely, the tariffs on China impact exports in the short term while accelerating the Lewis Turning Point, but the depreciation in the Yuan will ultimately lead to an increase in exports.
Thinking in terms of systems and system dynamics is a different way of thinking and perceiving the world. In many ways it is a paradigm shift from the traditional linear, cause and effect, sequential forms of thinking.
Systems thinking focuses on the whole, the behavior of the whole under certain sets of conditions, and the interconnectedness and behavior of the parts relative to the whole under those conditions as well. Systems thinking focuses on the underlying structure that underpins causality, outcomes, and feedback loops. It is both quantitative and qualitative in nature and requires synthetic as well as critical thinking.
When the world is viewed from a systems lens, it allows for greater perspective, clarity, and objectivity while everything else fades from view. This objectivity allows one to view the system much like a scientist would conduct an experiment or an engineer would design a bridge. Moreover, one of the most important factors is that systems thinking forces one not to blame; rather, it forces one to think in terms of accountability.
This is important because blame and accountability are not the same thing. Accountability realizes that errors will be made but it focuses on responsibility for one’s actions and to find constructive solutions, whereas blaming is used for shaming. This important distinction transcends all organizations and cultures.
A culture of blame is extraordinarily toxic. But most important is that blaming within a culture creates fear and this fear stifles creativity, innovation, information flows and the ability to take risks. Since innovation depends on risk taking, blaming ultimately hinders productivity. This is why it is imperative to create a foundation and culture that fosters continuous learning and advancement that is underpinned by accountability as this manifests self-actualization and transcendence.
As an aside, there are many instances whereby the system devised needs to be held accountable and changed rather than an individual agent within the system. However, the agents are accountable for allowing a system to be devised or allowed to persist without changing it when it is in their capacity to do so.
There is the financial economy and the real economy. The two are typically tethered together; however, as a result of the Covid-19 induced recession, it is apparent that there is a disconnect between the real and financial economies given the disparity in GDP, unemployment, and SME/Corporate bankruptcies on the one hand, and rising asset prices on the other. As a result, many speculate that the rise in asset prices alone will lead to rampant inflation in goods and services.
Please subscribe and email me for the passwords for the following analyses/essays/models:
How the Economy Works, New Credit Creation, Borrowing vs. New Credit Creation, Velocity of Money, Real vs. Financial Economy, & The Catch-22 of Leveraging/De-Leveraging https://internationalcapitalmarkets.org/2019/10/23/%f0%9f%94%bahow-the-economy-works-new-credit-creation-borrowing-vs-new-credit-creation-velocity-of-money-real-vs-financial-economy-the-catch-22-of-leveraging-de-leveraging/
New Model: The Dynamics of and an 11 Factor Model for Inflation https://internationalcapitalmarkets.org/2019/09/09/the-dynamics-of-and-a-model-for-inflation/
Interplay and Limitations Between Age Cohorts, Investment Needs, Search for Yield, and Institutional Confines https://internationalcapitalmarkets.org/2019/09/05/interplay-and-limitations-between-age-cohorts-investment-needs-search-for-yield-and-institutional-confines/
Implications of Global Protectionism, Why Negative Yielding Bonds Will Never Increase Spending https://internationalcapitalmarkets.org/2019/08/20/implications-of-global-protectionism-why-negative-yielding-bonds-will-never-increase-spending/
New Model: Calculating Minimum Wage and Poverty Line https://internationalcapitalmarkets.org/2020/08/07/a-new-model-for-calculating-minimum-wage/
For simplicity, credit that is created flows into the real economy, leading to GDP, which then circulates in the economy based on the velocity of money. The degree of circulation, or velocity of money, is fairly stable, notwithstanding an increase in the money supply for non-GDP activity, which leads to asset price inflation. As an aside, a disconnect between asset price inflation and GDP typically portends a crash more so than inflation.
That being said, a portion of the money that circulates is saved which goes into the financial economy. As the real economy grows, so does the financial economy. This is a process that works from the inside-out.
When assets are inflated, this is an outside-in approach. However, the manner in which the inside-out process works is not necessarily the same as the outside-in approach. Additionally, the distribution of wealth and the marginal propensity to consume need to be taken into consideration when assets are inflated.
As a result, there are only two ways for rampant inflation to occur when assets are inflated: 1) Borrowing against the collateral or 2) Selling down the assets. Proposition 1 would lead to inflation first because proposition 2 leads to a collapse in asset prices. Given the above, it is unlikely asset inflation alone will lead to rampant inflation in goods and services.
The Capital Asset Pricing Model (CAPM) tries to price the risk of an asset, particularly for stocks, and simultaneously becomes the expected rate of return, required rate of return, hurdle rate, and the discount rate. In other words, an investor would not invest in the asset unless the expected return on the asset was at-or-above the hurdle rate as specified by CAPM.
CAPM prices the opportunity cost of the risk-free rate and expected market return coupled with relative volatility. Breaking it down further, the ultimate risk being priced is the relative volatility because it is assumed that idiosyncratic risks can be diversified away, leaving only systematic risk.
The underlying assumptions are that relative volatility is the only risk that matters, markets are informationally efficient, and in order to generate a higher return you have to take on more risk. Yet, business risk and relative volatility are not the same. Moreover, to achieve a greater return, you have to take on greater uncertainty, which implies more risk, but not necessarily so if the uncertainty/risk/reward profile is favorable. A better measure of risk is the relative implied expectations of an asset.
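The CAPM hurdle rate described above can be computed directly from its standard formula; the risk-free rate, beta, and expected market return below are illustrative inputs, not market data:

```python
# Standard CAPM: expected return = risk-free rate + beta * equity risk premium.
def capm_expected_return(risk_free, beta, expected_market_return):
    return risk_free + beta * (expected_market_return - risk_free)

# A stock with beta 1.3 against a 2% risk-free rate and an 8% expected
# market return yields a hurdle rate of 0.02 + 1.3 * 0.06 = 9.8%.
hurdle = capm_expected_return(risk_free=0.02, beta=1.3, expected_market_return=0.08)
print(round(hurdle, 4))
```

An investor using CAPM would thus require at least a 9.8% expected return before investing in this asset.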
Please refer to the following analysis: 🔺 Risk, Uncertainty, and Managing Uncertainty https://internationalcapitalmarkets.org/2020/08/21/%f0%9f%94%ba-risk-uncertainty-and-managing-uncertainty/
Risk and uncertainty are often used interchangeably, for as the cone of uncertainty widens the further into the future we proceed, so does the probability of a negative expected outcome. However, risk is merely a subset of uncertainty, for uncertainty includes both positive and negative outcomes.
That being said, many of the risk models that are utilized employ a range of assumptions to create stationarity, whereby statistical and Bayesian inferences are applied. Yet, in many instances, the assumptions are intractable while non-stationarity reigns supreme in our complex adaptive system. The models are typically not recording events that are 20 standard deviations away or greater; rather, these are unknown unknowns or known unknowns which could not be captured by the underlying specificities of the models themselves.
When dealing with uncertainty, risk management is paramount, specifically the management of downside risks. In managing uncertainty, it is important to build resiliency, robustness, and optionality within the contingency/scenario planning stages. Additionally, it is beneficial to “fit models to problems, not problems into a model.”
As a simple example and thought experiment, when dealing with uncertainty, it is important to understand the present situation as thoroughly as possible. From there, look for similar references to form a base case of the future downside expectation, then create a decision-tree matrix of additional potential downside outcomes. As new information is received, compare it with the base case to see where similarities and differences exist. Greater divergences raise the probability that an alternative downside outcome exists, which needs to be checked against the potential negative outcomes and the possibility of unknown unknowns as new information arrives. Conversely, similarities to the base case lead to refinement of the base-case expectations. The iterations continue indefinitely and the process is refined. Ultimately, more buffers need to be created with greater uncertainty, greater risk, decreases in transparency, and increases in entropy.
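The iterative refinement described above can be sketched as a simple Bayesian update between a base-case downside scenario and its alternatives; all probabilities below are assumed purely for illustration:

```python
# Sketch: update confidence in the base-case scenario as each observation
# either matches or diverges from it. All probabilities are assumed.
def bayes_update(prior, p_evidence_if_base, p_evidence_if_alt):
    """P(base case | evidence) via Bayes' rule over two scenarios."""
    num = prior * p_evidence_if_base
    den = num + (1 - prior) * p_evidence_if_alt
    return num / den

p_base = 0.60  # initial confidence in the base-case downside scenario
# Each tuple: likelihood of the observation under (base case, alternative).
observations = [(0.8, 0.3), (0.7, 0.4), (0.2, 0.9)]  # third one diverges

for p_if_base, p_if_alt in observations:
    p_base = bayes_update(p_base, p_if_base, p_if_alt)
    # Observations that fit the base case raise p_base; divergent ones
    # shift weight toward the alternative downside outcomes.

print(round(p_base, 3))
```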
Please refer to the following analyses: Part 1: Algorithms, Machine Learning, and Future of Predictive Analytics https://internationalcapitalmarkets.org/2020/08/19/%f0%9f%94%ba-part-1-algorithms-machine-learning-and-future-of-predictive-analytics/
Why of Course! However, there is a Catch……..22 https://internationalcapitalmarkets.org/2019/09/16/why-of-course-however-there-is-a-catch-22/
There are roughly 22 million confirmed Covid-19 cases globally, 13 million have recovered, and 800,000 have died. As a percentage of the total population, the number of confirmed Covid-19 cases is only 0.2% of the global population. In percentage terms this seems inconsequential and not a cause for concern on the surface. In fact, by itself, it may even seem that governments are overreacting. However, the event is not isolated and is certainly a pandemic considering how widespread infections have become. Moreover, pandemics follow a typical S-curve pattern and grow non-linearly for a period of time – this is what you do not want to wait and see happen. Some of the bigger questions facing governments are how many deaths are too many and how to balance Covid-19 with economic growth?
While there is not a right answer, any rational government would consider one death too many in any pandemic scenario. The more the infection spreads and the more deaths that occur, the less credible the government becomes, especially if the government tries to manipulate numbers, reduce testing availability, or pretend the pandemic doesn't exist. Taken to an extreme, a pandemic has the ability to overthrow a government if not managed properly. A pandemic creates an inherent catch-22 for the government: implementing masks and social distancing causes many to feel their civil liberties are being violated while fueling conspiracy theorists, yet doing nothing and letting infections and deaths increase will lead to outrage and reduced credibility and trust in the government itself, although people will then realize the pandemic is not a hoax, at which point it is too late. Looking at the intangible costs, the government has no choice but to mandate lockdowns, the wearing of masks, and social distancing as preventative measures, lest it risk its own insolvency and economic collapse.
That being said, suppose the United States decided that everyone should get infected, build antibodies, and achieve herd immunity, with the body in effect creating its own vaccine. There are roughly 6 million confirmed cases and 200,000 deaths against a population of 330 million. If you assume cumulative deaths rise from roughly 0.06% of the population to 0.5% to achieve herd immunity, then roughly 1.65 million people would die. Clearly, that would never happen because action would be taken well before then.
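The back-of-the-envelope arithmetic can be checked directly, using the essay's rough figures of 330 million people and 200,000 deaths, and an assumed 0.5% cumulative death share under the herd-immunity path:

```python
# Rough figures from the text; the 0.5% death share is an assumption.
population = 330_000_000
deaths_observed = 200_000

current_death_share = deaths_observed / population  # ~0.06% of the population
herd_immunity_death_share = 0.005                   # assumed 0.5%

projected_deaths = population * herd_immunity_death_share
print(round(current_death_share * 100, 3), int(projected_deaths))
```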
The other alternatives include 1. Lockdowns/mask mandates/social distancing, 2. Mask mandates/social distancing, or 3. Optional mask mandates/social distancing. The countries that have succeeded so far in keeping transmission rates low followed steps 1 and 2 while the United States has followed something similar to option 3. Additionally, those countries that have followed steps 1 and 2 have seen a rebound in their economies as a result of sound monetary and fiscal policies coupled with strong social safety nets.
Ultimately, preventative measures slow transmission on the path to herd immunity while also buying time to develop an efficacious vaccine or cure. Moreover, if the sole purpose is to profit from a cure or vaccine, then the incentive structure could become perverted and induce the continuation of sickness, which is counterproductive; a country might then follow something similar to option 3, or even less, because more sick people means more profits, which is inherently unethical and nonsensical.
There is a general misunderstanding among large swaths of the populace who believe that people receive unemployment benefits that are greater than what they earn if they were working which incentivizes further unemployment. By design, unemployment benefits are a part of the social safety net and do not incentivize further unemployment.
For starters, without looking at the dynamics of the unemployment benefit system, just realizing that the current model for prosperity requires continuous growth indicates that nobody would devise a policy whereby it is incentivized to not grow. Therefore, it follows that the unemployment benefit scheme is structured in such a way so as to provide relief temporarily while not stifling growth indefinitely. Let’s take a closer look….
Unemployment benefits are paid from payroll taxes. The benefit is typically for 26 weeks and is usually half of what was earned with caps in place. Many states require recipients to send in a minimum level of job applications to continue to receive benefits as well. This mechanism does not incentivize unemployment by any means.
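The typical benefit structure described above can be sketched as follows; the $450 weekly cap is an illustrative placeholder, since actual caps and formulas vary by state:

```python
# Simplified sketch of a typical state unemployment benefit: roughly half of
# prior earnings, capped, for up to 26 weeks. The cap is illustrative only.
def weekly_benefit(prior_weekly_wage, replacement_rate=0.5, weekly_cap=450):
    return min(prior_weekly_wage * replacement_rate, weekly_cap)

benefit = weekly_benefit(800)       # a worker who earned $800/week gets $400/week
total_26_weeks = benefit * 26       # maximum payout over the benefit period
capped = weekly_benefit(1200)       # a higher earner hits the $450 cap

print(benefit, total_26_weeks, capped)
```

Because the replacement rate is well below 100%, the recipient is always materially worse off than when working, which is why the mechanism does not incentivize unemployment.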
However, as a result of Covid-19 and the CARES Act, some people were receiving more in benefits than they earned while working. This is because it was an emergency measure applied unilaterally, and many workers who lost their jobs were in lower-paying jobs such as janitorial services. Additionally, the states' computer systems were overwhelmed by the demand for unemployment benefits. Also, in some instances, beneficiaries might make more staying at home than an essential worker earns, which is why many have suggested hazard pay for essential workers. Despite some receiving more in unemployment benefits than they earned while working, there is no indication this has increased unemployment. In fact, the number of unemployed still exceeds the number of job openings.
As the initial shock from unemployment has leveled-off somewhat, a more nuanced and targeted unemployment package can be created that is more in-line with previous earnings to make citizens whole given the circumstances. The mechanism would revert back to its original systematic design once conditions normalize.
In the context of earning a wage for a living, the ability to increase one’s wages is a function of 1. Career Advancement, 2. Horizontal Job Moves, 3. Vertical Job Moves and/or 4. Seniority/Loyalty Promotions.
Ideally and in theory, an individual will advance as far as their skill set will take them until they rise to their level of greatest incompetence. It is at this juncture that all have risen to their greatest incompetency, whereby, as a whole and on average, the incompetencies cancel out when diversity exists. However, there are also frictions to simply advancing indefinitely to one's greatest incompetency, such as market forces and hierarchical structures. Such frictions tend to lead to horizontal and/or vertical job moves.
A horizontal job move is a function of switching jobs to a new employer for the same role. The new employer will typically offer a wage that is higher than your previous wage for your services unless in the negotiating process you say exactly what you were paid before, your position is becoming obsolete, or you were paid higher than market rates dictate and the new employer is at parity with market rates.
A vertical job move leads to a wage increase because the default assumption is the vertical job move is also career advancing at which point one would receive the prevailing rate in the market unless negotiated higher.
The other alternative is remaining employed at a single company until retirement and watching your wages increase based on a range of factors including seniority, loyalty, company growth, and market forces. Market forces inherently dictate there is a limit to one’s wage increases. Additionally, there is a limitation to joining a company until retirement considering the pace at which disruption now occurs.
While these are the viable means to increase one's wages, there is a threshold at which a dynamic minimum wage needs to be set. If left to its own devices, the private sector will set wages at the level that maximizes its profits: the lowest level above insolvency and below the required minimum wage threshold, as the burden of responsibility to pay a wage above the poverty line then rests on the government to fill in the gaps via entitlements. Yet, in setting a market rate below the required minimum wage and poverty line, the private sector is also against entitlement programs because they require higher taxes on the private sector, individuals, or both, an inherent catch-22. Additionally, setting a rate below the required minimum wage and poverty line reduces savings given the dynamic inflation composite, causing more households to become dependent on Social Security in later years, another entitlement program.
The current poverty line calculation utilizes pre-tax cash income against a threshold that is set at 3x that of a minimum food diet in 1963!
The minimum wage model needs to be set according to the average, basic, lowest cost structure for a household and adjusted for that inflation composite. This threshold becomes the poverty line and adjusts dynamically. This threshold is then compared across the distribution of wages in the United States to see where gaps exist between the minimum wage and market rates, as this identifies where entitlements flow the most and where policies need to be devised. This places the burden on the private sector. In turn, the private sector will argue this will make businesses less competitive in the global market. This is not so when applied on a uniform basis; it forces the entire system to innovate and create value.
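A minimal sketch of this dynamic threshold follows; every cost figure and the inflation adjustment are assumed purely for illustration, not proposed values:

```python
# Sketch: sum a basic household cost structure, index it by an inflation
# composite, and convert it to an hourly minimum wage. All figures assumed.
monthly_costs = {
    "housing": 900, "food": 400, "transport": 250,
    "healthcare": 300, "utilities": 150, "other": 200,
}
inflation_composite = 1.02   # assumed year-over-year composite adjustment
hours_per_year = 52 * 40     # full-time work, 40 hours/week

annual_threshold = sum(monthly_costs.values()) * 12 * inflation_composite
dynamic_minimum_wage = annual_threshold / hours_per_year

print(round(annual_threshold, 2), round(dynamic_minimum_wage, 2))
```

Because the cost inputs re-adjust each period, the resulting poverty line moves with actual living costs rather than a 1963 food-diet multiple.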
The property management industry is complex, but in simple terms you want to increase rents where possible, increase lease renewals, maintain high occupancy levels, be highly efficient, build relationships, provide exceptional customer service, and ruthlessly keep costs down.
There are roughly 300,000 property management companies in the United States. The addressable market size is nearly $90 billion, with no major player holding a market share greater than 5%. The industry is highly competitive, making relationships and customer service invaluable intangibles. Typical services provided by property managers include rent/fee collection, leasing, maintenance, property inspections, marketing vacancies, evictions, cleaning, purchasing/selling properties, etc.
In general, demand tends to be countercyclical in nature. During an economic downturn, demand for residential rentals increases whereas demand softens during an upswing, but commercial properties become more important during this phase of the economic cycle.
Demand for dwellings is driven by the household formation rate, which is underpinned by population growth, job/income growth, and demographics. There are roughly 325 million people living in the US in 118 million dwellings, split between 36% renters and 64% owners. Of the renters, roughly 37% are in single-family homes, 42% in apartments, and 21% categorized as other. Against this backdrop, rental demand is expected to be strong as a result of the affordable housing shortage and the population aged 35 and under growing faster than the general population.
Some of the factors leading to the affordable housing shortage include high student loans (average $37,000), stringent credit requirements, rent outpacing wage growth, high home prices, and young adults delaying getting married or having children. This has led to people renting later in life coupled with a more diverse group of renters. As a result, a one-size fits all approach is unlikely to work, whereas a more personalized and customizable experience is preferred. In turn, this has caused many property managers to invest in technology to meet these needs.
A recent trend that has occurred is portfolio shrinkage for property managers. This was due to newly built rentals flooding the markets, high home values causing accidental rental landlords to sell their properties, and investors slowing their pace of acquisitions. These factors have caused larger property managers to acquire smaller firms.
An innovative culture allows an enterprise to be more anticipatory as opposed to reactive in nature. Being anticipatory is a competitive advantage against the backdrop of faster change and disruption occurring around us as a result of computing power, bandwidth, and digital storage. Amid increased change and uncertainty it is no longer enough to 1. simply be agile and 2. benchmark, as you will always be a step behind.
An innovative culture also improves the accuracy of anticipating the future, allows decisions to be made with greater confidence, drives exponential, disruptive innovation with lower risk, and increases trust within an organization. This empowers an organization to shape and influence the world around it from the inside out rather than react from the outside in.
The ability to anticipate and shape the future requires the ability to distinguish between hard and soft trends. Hard trends are based on facts and are a certainty. Soft trends can be based on hard or soft assumptions: hard assumptions are based on facts but have the ability to change, whereas soft assumptions are opinions based on facts. Planning based on hard trends, and on soft trends grounded in hard assumptions, allows an organization to anticipate and shape the future. This also forces the organization to have a view towards the future rather than rearview-mirror thinking. This is all the more important because how you view the future impacts how you act in the present, which shapes your future.
It is estimated that it costs roughly $80,000 to operate a business in its first year. That being said, most entrepreneurs obtain their initial funding from personal funds and/or family and friends, typically $10,000 or less. Considering the initial upfront costs coupled with expectations for growth or cash shortfalls, entrepreneurs will need to obtain outside financing at some point.
Capital providers such as angel investors, venture capitalists, family offices, and accelerators/incubators review thousands of proposals. Of those proposals, only a handful receive funding. And of those that receive funding, a few of those ventures are profitable and provide an ROI. Against this backdrop, the odds of receiving funding are not stacked in the entrepreneur’s favor at the outset. However, the goal when obtaining funding is to strategically stack the deck in the entrepreneur’s favor.
This requires entrepreneurs to first think about the totality of their business and not simply in terms of providing a great idea for a product or service. The entrepreneur needs to view the providers of funds as they would a customer and tailor their business pitch accordingly while focusing on 1. the investors' objectives and 2. the exit plan for the start-up.
Investors typically find deals with people they know, like, and trust, which is why it is critical to network. Additionally, investors look at the characteristics of the founder and qualities of the team more so than they do the economics of the business, because it is the team that has to navigate the array of challenges and likely pivots going forward. It is highly important to be coachable (not defensive) because the providers of capital value their time, resources, and knowledge even more than their money. Even with the best business plan and pitch deck there will be many “no’s,” which is why it is important to cast a wide net.
Lastly, often overlooked, it is critical to focus on managing downside risk. Reducing risks inherently increases the value of your business and makes investors feel more secure in funding your start-up because they understand how to recoup their investment should it fail.
For simplicity, venture capitalists invest in the future. They invest their money, time, and resources in future management, technology and ideas with the hopes of generating a return for their investors and themselves upon a liquidity event.
Venture capitalists are able to reduce risks by pooling their capital and resources with other venture capitalists. The required returns are particularly high because investments are being made in ventures that are rather risky as these are companies, management teams, products, ideas, and technologies that are still in their formative stages against the backdrop of ever present change and competition.
When considering the investors’ required returns, fees, and venture capitalists’ required returns, investment opportunities have to yield greater than 25%-30% to become attractive. This means the venture capitalists’ expectations decide whether the investment is worthwhile. It also means a great many products, ideas, and technologies are forgone not because they are not potentially profitable, viable future businesses, but simply because the expected returns are not high enough.
This creates a bottleneck to future innovation and growth: there are only so many new ventures that can yield greater than 25%-30% returns. Venture capitalists’ returns are also usually skewed. For example, out of a portfolio of 10 ventures, 7 may fail while 3 succeed – a loss-to-success ratio of 70%/30%. However, the returns on the successes can be great enough to offset the losses, and this is what attracts venture capitalists – frequency vs. magnitude.
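The frequency-vs.-magnitude dynamic can be sketched with a few lines of arithmetic. The deal size, exit multiples, hurdle rate, and holding period below are illustrative assumptions, not figures from the text:

```python
# Hypothetical 10-venture portfolio: 7 total losses, 3 large wins.
invested_per_deal = 1_000_000               # assume $1M into each venture
outcomes = [0.0] * 7 + [3.0, 5.0, 12.0]     # assumed exit multiples on invested capital

total_invested = invested_per_deal * len(outcomes)
total_returned = sum(m * invested_per_deal for m in outcomes)
portfolio_multiple = total_returned / total_invested

print(f"Loss-to-success ratio: {outcomes.count(0.0)}/{len(outcomes) - outcomes.count(0.0)}")
print(f"Portfolio multiple: {portfolio_multiple:.2f}x")  # 2.00x despite 70% losses

# The 25%-30% hurdle translates into a steep required exit multiple.
hurdle, horizon = 0.25, 5                   # assumed 25% annual hurdle, 5-year hold
required_multiple = (1 + hurdle) ** horizon
print(f"Multiple needed for {hurdle:.0%} over {horizon} yrs: {required_multiple:.2f}x")  # 3.05x
```

Note how the two winners at 5x and 12x carry the entire portfolio; the loss frequency matters far less than the magnitude of the successes.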
With such constraints, venture capitalists can in fact overpay for these ventures when considering the price paid-to-equity ratio. This can cause venture capitalists to all target the same opportunities leading to not only overpaying but also a myopic focus on unicorn companies. There are only so many unicorn companies that come to fruition whereby an investment strategy based solely on unicorns is not very stable.
Many products have a UPC bar code that can be scanned into a computer database which is regularly updated to reflect price changes.
Most stores require scanning every product first and feeding this data into a database, which takes into account inventory levels to then update prices. From there, a rolodex of pricing labels is created, and these labels then need to be manually placed under the products on the shelves. This is inherently tedious and inefficient.
I have a Prototype for a UPC/Pricing Label Scanner and Printing Gun which has its own database of products, inventory levels, and prices built into it. Simply scan the UPC and it will create a label which can be applied via an adhesive backing.
Private equity (PE) has been an attractive investment class stemming from strong returns and diversification. For simplicity, private equity buys businesses, grows businesses, and sells businesses. The difference between the price paid and sold for the businesses is how private equity makes money and generates a return.
Private equity typically takes a controlling interest in a company via a mix of debt and equity while creating a range of incentives to align the acquired business interests with its own. The focus is to grow EBITDA which can be achieved via 1. Organic growth, 2. Increasing margins, or 3. Buy-and-build.
As with any investment, private equity is driven by expectations as to how much and how soon the acquired company will grow. Deviations between expectations and actual results impact the internal rate of return (IRR) and the Multiple on Invested Capital (MOIC). This is why due diligence is imperative: assessing expectations drives the price paid, the expected exit value, and the expected IRR.
Overpaying can result from overly lofty expectations, poor performance, or too much money chasing too few deals. A lower exit multiple can result from lower growth expectations and margins or from declining equity markets, all of which impact the IRR. Many private equity portfolios are skewed, with more losing investments than winners, although this can be offset by the winning investments generating higher returns – frequency vs. magnitude. However, such a dynamic can cause private equity to seek out only “unicorn” investments, which are few and far between.
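The IRR/MOIC mechanics can be illustrated with a minimal single-deal sketch. All figures below are hypothetical, and the simplification assumes one cash flow in at entry and one out at exit:

```python
# Hypothetical deal: $100M of equity in, $250M of equity out after 5 years.
equity_in = 100.0
exit_equity = 250.0
years = 5

# MOIC = total value returned / equity invested.
moic = exit_equity / equity_in
# With a single in/out cash flow, the IRR is just the annualized multiple.
irr = moic ** (1 / years) - 1

print(f"MOIC: {moic:.1f}x")   # 2.5x
print(f"IRR:  {irr:.1%}")     # ~20.1%

# Deviation from expectations: the same exit value a year later compresses IRR.
irr_delayed = moic ** (1 / (years + 1)) - 1
print(f"IRR if exit slips a year: {irr_delayed:.1%}")  # ~16.5%
```

This is why timing deviations matter as much as value deviations: the MOIC is unchanged in the delayed scenario, yet the IRR falls by several points.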
Perspective is very important. However, if you ever studied fractal geometry, you realize quite quickly that perspective at times is one and the same. Fractal geometry shows that at various scales (big and small), you can’t tell the difference because the dynamics are the same. This is unique because I typically prefer a top-down perspective versus bottoms-up as a result of the fallacy of composition. But, fractal geometry offers a different perspective on that composition which I find fascinating.
When comparing the fallacy of composition and fractal geometry, it’s important to not make the two compete as they are distinct and offer their own unique perspectives. The primary reason I prefer top-down analyses versus bottoms-up is because what happens on a micro scale is not always best on a macro scale. Granted, that micro scale can always become a macro scale if permitted under various conditions.
Given the above, I was thinking about the current situation facing the World Trade Organization (WTO). On a micro scale, arbiters make sure the playing field is fair and equitable. The United States tries to make sure that companies do not have monopoly status whereby they can continuously overcharge customers, etc. On a larger scale, the World Trade Organization (WTO) is an arbiter that makes sure that trade is fair and equitable among countries that meet the guidelines within the WTO. In many ways the scaling from arbiter of what’s fair and equitable, to the United States, to the World Trade Organization is fractal in nature.
As globalization, trade, and interconnectedness have increased, a larger body has naturally formed to make sure global trade is fair and equitable among all parties involved. Should the WTO no longer exist, countries will have to settle disputed claims amongst themselves directly. This will be more difficult in an environment of increased nationalization, xenophobia, and protectionism. Absent the WTO, and in an environment of pure protectionism among all countries, trade will grind to a halt or conflicts will likely increase without an arbiter to make sure trade is fair and equitable.
Companies pursue Mergers and Acquisitions (M&A) for many reasons, with the oft-quoted rationale being synergies. Therein lies the problem: there are only so many reasons to pursue M&A, and those reasons have to create actual synergies, not merely be cloaked within the term synergy.
The definition of a democracy (noun) is: A system of government by the whole population or all of the eligible members of a state typically through elected representatives.
Despite the definition and constant repetition of the phrase democracy, it is important to be distinct and to state what type of democracy is being advocated, as there are 8 general types of democracy: 1. Direct, 2. Representative, 3. Presidential, 4. Parliamentary, 5. Authoritarian, 6. Participatory, 7. Islamic and 8. Social.
In many instances the few control the many, yet, regardless of the form of governance (8 Democracies, Monarchies, Authoritarianism, and No Governance), the many always control the few because all governance systems require the greatest good for the greatest number (Utilitarian Principle), otherwise the few will be overthrown.
That being said, it is important to distinguish between a Democracy and Capitalism. Democracy is a system of government while capitalism is an economic system and political ideology.
The definition of capitalism (noun) is: An economic system and political ideology in which a country’s trade and industry are controlled by private owners for profit, rather than by the state.
Democracy and Capitalism are not the same. However, when the primary political ideology is Capitalism, then Democracy as a system of government is usurped by the political ideology of Capitalism. There is no longer a Democracy that practices Capitalism at this juncture, rather, it simply becomes Capitalism and Capitalism as a political ideology becomes the mainstay of the political system. Yet, this can be problematic based on the below excerpts from my recent analyses/essay:
🔺 “Caveat Emptor” Lacks Accountability and Serves No Purpose https://internationalcapitalmarkets.org/2019/10/17/%f0%9f%94%ba-caveat-emptor-lacks-accountability-and-serves-no-purpose/
“If left to its own devices, the private sector can act in such a way so as to only benefit itself at the expense of others creating A Tragedy of the Commons – examples abound. Moreover, it is in the inherent interest of the private sector to advocate for laws, policies, and regulations that benefit its own self interest if left unchecked as this invariably leads to greater profitability via gaming the system to its advantage via regulatory capture. However, such actions in the aggregate when viewed from a Fallacy of Composition perspective reveals this creates negative externalities and a deadweight loss on society that is absorbed by everyone collectively.”
“For starters, there is not any reason that Caveat Emptor should even exist in the marketplace. Why should a buyer beware of the products sold, rather, it should be Cave Operante or Producer Beware as it is the consumers who vote with their currency. The notion of Caveat Emptor immediately places the burden on the consumer of the product and/or service thereby reducing accountability of the producer. It would seem that the marketplace via its self regulating properties would immediately eliminate those companies that pose the greatest risks to consumers of the product/service, however, this is not necessarily true. For if this was true, then laws, policies, regulations and lawyers to protect consumers would not exist because companies that cause serious harm would immediately go out of business.”
Eventually, regulatory capture can lead to information and political capture, where corporations et al. can spend money on political campaigns, which further ensures their interests and prejudices while political appointees are indebted to their donors’ money, views, and prejudices – including gerrymandering and voter-suppression efforts. Or, because the private sector holds the power and information asymmetries, a revolving door between the public and private sectors occurs, which further ensures the private sector’s interests. To combat the strength of the private sector, the public sector may grow even larger, leading to concerns of big government. This plays directly into the private sector’s hands, because the areas of government it argues are too big and need to be reduced are precisely those that maintain the proper checks and balances in the first place.
Therefore, on a standalone basis, it seems to me Capitalism wins vs. Democracy, but Capitalism ultimately fails via increased inequality as well as regulatory, information, and political capture. Perhaps it is only through Joseph Schumpeter’s creative destruction that one of the eight forms of democracy is restored.
Then again, regardless of form, any governance system requires the greatest good for the greatest number.
If left to its own devices, the private sector can act in such a way so as to only benefit itself at the expense of others creating A Tragedy of the Commons – examples abound. Moreover, it is in the inherent interest of the private sector to advocate for laws, policies, and regulations that benefit its own self interest if left unchecked as this invariably leads to greater profitability via gaming the system to its advantage via regulatory capture. However, such actions in the aggregate when viewed from a Fallacy of Composition perspective reveals this creates negative externalities and spill over effects resulting in a deadweight loss on society that is absorbed by everyone collectively. This brings me to the notion of Caveat Emptor or Buyer Beware.
For starters, there is not any reason that Caveat Emptor should even exist in the marketplace. Why should a buyer beware of the products sold, rather, it should be Cave Operante or Producer Beware as it is the consumers who vote with their currency. The notion of Caveat Emptor immediately places the burden on the consumer of the product and/or service thereby reducing accountability of the producer. It would seem that the marketplace via its self regulating properties would immediately eliminate those companies that pose the greatest risks to consumers of the product/service, however, this is not necessarily true. For if this was true, then laws, policies, regulations and lawyers to protect consumers would not exist because companies that cause serious harm would immediately go out of business.
The alternative to Caveat Emptor is placing labels and educating the consumer on the harm and risks that a particular product or service can cause. However, in so doing, the accountability has still shifted from the producer to the consumer of the product again. Moreover, the labels, education, and marketing of the product/service presumes that all relevant pieces of information are put forward in good faith for the consumer to make a valid assessment, which is also not necessarily true.
The ability to place labels regarding the harm a product or service can cause creates a “well, you should have known better” argument for producers against consumers, and still absolves producers of accountability as a result. Rather than fall back on Caveat Emptor or labels, products/services that can cause serious harm should not even be able to enter the marketplace, as such harm inherently serves no purpose. However, in so doing, others will argue that such an action is a threat to freedom of choice. Yet no consumer, from a mass-consumption perspective, should be choosing between product/service Serious Harm 1 and product/service Serious Harm 2, irrespective of labels.
The scientific method is a process for solving problems. The typical scientific method is a 7-step iterative process consisting of: 1. Making an Observation, 2. Asking a Question, 3. Doing Background Research, 4. Forming a Hypothesis, 5. Conducting an Experiment, 6. Analyzing Results and Drawing a Conclusion and 7. Reporting/Sharing Results. If the results differ from the hypothesis, a new hypothesis is created and the iterative process begins again. Does the scientific method have shortcomings? Sure – particularly when interacting variables create a new variable for which the control cannot be tested. Even so, beyond being a process, the scientific method ultimately shows that there is no such thing as failure; rather, failure is simply a feedback mechanism and can be likened in many ways to bumper bowling.
When viewed this way, failure is merely part of a process that refines and leads to greater understanding. The same is also true for success. Taken a step further, success and failure do not exist; rather, they are merely feedback mechanisms within an iterative process. Why is it important to view success and failure not as existing, but objectively as part of an iterative process?
Both success and failure have physiological and psychological impacts. Success can cause a lack of reflection, refinement, and improvement. Internalizing failure can cause depression, low self-esteem, and the inability to trust your own judgement. An objective approach that views success and failure as merely feedback can mitigate, to a degree, the emotional, psychological, and physiological aspects that success and failure bring. Additionally, it fosters the notion of process over outcome because ultimately the correct process, systems, and principles win out in the long run despite aberrations.
The goal is not to eliminate emotions, but rather to limit the negative aspects that success and failure can have cognitively, psychologically, and physiologically so continued improvement, refinement, and understanding can occur.
There is a caveat as stupidity does exist. For example, if you put gasoline in your house, lit a match, and watched your house burn down for no apparent reason, or eat Tide Pods, that is stupidity.
All investors make investments with the underlying expectation to make money, otherwise, they would not make the investment in the first place. This is precisely why there should not be any delineation in style between value and growth investing as both styles fundamentally are being made with the expectation to make money.
Quite simply, 1) growth stocks arbitrage actual and implied growth rates and 2) value stocks arbitrage actual and implied intrinsic values. Yet when intrinsic value is being calculated, growth is inherently a component, which is why both styles (growth and value investing) collapse into arbitraging actual versus implied growth rates/intrinsic values. If you take this iteration to its next level, the only thing being arbitraged is actual versus implied expectations. This is why I agree with Warren Buffett that value and growth are inherently joined at the hip.
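As a minimal sketch of actual-vs.-implied expectations, consider the simple Gordon growth model, P = D1 / (r − g), which makes the growth component of intrinsic value explicit. The price, dividend, and required return below are hypothetical inputs, not figures from the text:

```python
# Hypothetical inputs for a dividend-paying stock.
price = 50.0             # current market price
next_dividend = 2.0      # expected dividend next year (D1)
required_return = 0.09   # required rate of return (r)

# Rearranging P = D1 / (r - g) gives the growth rate the market price implies.
implied_growth = required_return - next_dividend / price
print(f"Growth implied by the market price: {implied_growth:.1%}")  # 5.0%

# If your own estimate of sustainable growth differs, the same formula
# yields your intrinsic value -- growth is inherently a component of value.
my_growth = 0.06
intrinsic_value = next_dividend / (required_return - my_growth)
print(f"Intrinsic value at 6% growth: ${intrinsic_value:.2f}")  # $66.67
```

In this toy example, the “value” gap ($66.67 vs. $50) and the “growth” gap (6% vs. 5%) are the same disagreement expressed two ways, which is the collapse into a single arbitrage of expectations described above.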
Arbitraging expectations comes in four forms: 1) Arbitraging the fundamentals; 2) Arbitraging human emotion; 3) Arbitraging structure and/or 4) Arbitraging the derivation of expectations. The best solution is to utilize a strategy that arbitrages all of those components. The real key is understanding what is driving the underlying expectations in the first place and how far into the future the markets can actually see (that is my latest model).
For purposes here, I will use the term value investor because it is part of the investment lexicon. A particular concern that I have is blindly following a strategy that defines value stocks as merely those that are “relatively cheap” or “cheap” (where “cheap” simply means a lower P/E ratio) against the backdrop of persistently low interest rates propping up markets and the search for yield. Here is why.
The extraordinarily low interest rate policy has been utilized to prop up markets along with aggressive management of expectations to the upside. Remember, the more expectations have to be managed at the expense of the fundamentals, the more downside risk there is as that is a sign of weakness. If markets have been propped up by relatively low interest rates and particular value stocks that were cheap before remain cheap or are even cheaper, then that means they are a bargain and will benefit from mean reversion, right? No! Those are stocks that have fundamental/structural issues and purchasing those companies will result in a permanent loss of capital or dead money via opportunity costs. Also, purchasing those aforementioned value companies with the belief that they are already poorly performing businesses and will benefit to the upside on a relative basis if the market declines because their downside risk is already priced in is nonsensical.
(Note: Originally published 7/8/2018 @https://patreon.com/diamond1)
In a general sense, the larger and more bloated a government becomes from a combination of reckless fiscal spending, unsound monetary policy, and oppressive laws, the more that government will have to increase taxes on citizens, goods/services, other countries, or financial markets.
The United States Tax Cut and Jobs Act (USTCJA) was heralded by Republican think tanks as a tax cut for the middle and lower classes and touted as bringing investment into the United States. There were arguments that the tax cuts would confer the same benefits that occurred during the Reagan era. However, the analysis did not account for changes in underlying business dynamics since the Reagan era or for the size of the budget deficits in the United States.
To recap, the GOP Tax Cut and Jobs Act was marketed as a tax cut for the middle and lower classes and was passed intentionally during the holidays as a means to provoke spending in the US via consumer credit cards while juicing numbers during Q1. It was known it would benefit the wealthy and corporations. It was known corporations would buy back stock, pay dividends, sit on cash, or partake in M&A while laying off employees. The bonuses reported in the news were few and far between and served to cover up these shortcomings. Also, analysts for particular companies in the US indices lowered estimates so there would be earnings surprises while everyone was managing expectations higher. In totality, all of this was done to support the markets. Confidence is supposed to be as good as gold in the markets, but the more you have to rely on managing expectations, the weaker the markets and economy actually are. The more expectations are managed, the more credibility is lost if they are not supported by unmanipulated figures. The other steps being used to prop up the US markets include continued usage of margin financing, banks being allowed to trade for their own accounts again, and the dismantling of important aspects of the Dodd-Frank Act.
It is better to look at the total amount of debt in the USA – which could include US government debt, corporate debt, household debt, pension shortfalls, and off-the-books military spending – to get a true sense of the total debt. You would also want to include the increase in the deficit from the Trump Administration’s Tax Cut and Jobs Act (TCJA). For the pension programs in the USA, the required return has been kept at an artificially high level as a means to keep liabilities low, but this impacts those dependent on the pensions by requiring additional contributions, limiting increases in wages, or requiring additional funding via borrowing, unfortunately. The use of investor proxies is also employed as a means to push reform in other countries so they look more Western in nature, which, given their stage of development, is not always appropriate and invites double standards.
When the USA can no longer tax its citizens, it would then have to 1. tax other countries (directly and/or indirectly), 2. sanction other countries, or 3. force support for the USD. There are already capital controls in the USA that impact regular everyday citizens, but this is disguised and is actually utilized solely to prevent a run on the bank. Due to all of the above, this is why Trump and the Trump administration are unfortunately cutting all of the programs in the USA and internationally that have any social value while attacking President Obama. It should be noted that Donald Trump rushed the movement of the US embassy to Jerusalem, which led to massive bloodshed. He also authorized Syrian bombings that went against protocol; on the second occasion, he did not even wait for an inspection. The release of US prisoners is being utilized as a bargaining chip, but also to mitigate the disasters done so far while trying to get Trump to win the Nobel, which doesn’t make sense as arms deals are still being supported with other countries internationally. Trump, the Trump Administration, and ultra-conservative players continue to stick to their campaign while making up for any shortfall in as public a manner as possible via various tactics – just look at the timeline.
Despite all of the above, and in a general sense, I do think that a value-added tax system is a better model. As a caveat, a value-added tax system is only as good as the degree to which the underlying taxes are viewed as value-added. If you think lowering tax rates for coal is value-added while increasing taxes on new/renewable energy, then that is probably not value-added; it might seem value-added, but only because there is a disconnect between the policy and values in the first place.