As part of our search for new managers to include in our database, we scan a wide variety of sources. Most of these managers are hardworking, ambitious, and entrepreneurial individuals and companies with good intentions. However, recently we have come across several managers that may have other intentions. A few of the issues that we have identified are:
- Instant Track Records – previously unknown organizations that appear overnight with long track records and supposedly large assets under management.
- Implausibly stable returns from trading strategies (we are talking about Sharpe ratios well above ten, which are literally not observed outside Renaissance Technologies and illiquid strategies with managed P&L).
- Generally unlicensed in the countries where they do business.
- Webpages that share the same design: attractive, but largely structured in the same manner, with similar offerings in terms of strategies and descriptions, while the returns and names differ slightly.
- Text that appears to be copy-pasted across the websites; most of it is not original content and can be traced to more reputable firms.
- Little or no prior presence on the internet; the fund managers do not exist on LinkedIn (which is almost unheard of in the industry).
- Supposedly large and established organizations where the majority of the addresses are communal office spaces (e.g., WeWork, Regus).
- What is more interesting is that they seem to have used “Deep Fake” photos to generate pictures of their investment staff. Several free tools exist for this, the most famous one being https://thispersondoesnotexist.com/
So far, we have found three different companies (two Australian, one Dubai-based) with essentially the same websites, messages, and return characteristics. Each of the sites features the key investment members as Head of Wealth Management, Asset Management, Managed Programs, Trading, Alternative Investments, Chief Compliance Officer (in various jurisdictions), and Software Engineering. It would be strange for three unrelated organizations to pick the same functions to highlight for their key persons, not to mention the same organizational structure.
However, after further inspection of the provided personal photos, we also believe that something else is going on. A photo is often used to create a more credible impression, but photos can be manipulated and generated. The benefit of using computer-generated photos is that a reverse image search will not yield any results. Had they used real photos, it might have become obvious that the persons pictured were not associated with the company, but were simply borrowed from someone else’s likeness.
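To see why a reverse image search comes up empty, consider how such engines match images: they reduce each photo to a compact perceptual fingerprint and look for near-duplicates in their index. Below is a minimal sketch of one such fingerprint, the "average hash"; the 8x8 grayscale grids are synthetic stand-ins for real photos, and the point is simply that a freshly generated portrait produces a hash that matches nothing previously indexed.

```python
# Minimal sketch of a perceptual "average hash" (aHash), one of the
# fingerprinting techniques behind reverse image search. A GAN-generated
# portrait is a brand-new image, so its hash matches nothing in any index.
# The 8x8 grayscale grids below are synthetic stand-ins for real photos.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255) -> 64-bit fingerprint."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # Each bit records whether that pixel is brighter than the average.
    return sum((1 << i) for i, p in enumerate(flat) if p > avg)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means a near-duplicate."""
    return bin(h1 ^ h2).count("1")

# A "known" photo, a slightly re-compressed copy, and an unrelated image.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
recompressed = [[min(255, v + 3) for v in row] for row in original]
unrelated = [[255 - (r * 8 + c) * 4 for c in range(8)] for r in range(8)]

print(hamming_distance(average_hash(original), average_hash(recompressed)))  # near 0
print(hamming_distance(average_hash(original), average_hash(unrelated)))     # large
```

A re-compressed or resized copy of a known photo still lands within a few bits of the original hash, which is what lets a search engine find "borrowed" headshots; a generated face has no such neighbor.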
For reference, we have provided the pixelated headshots of the key members. Deep Fakes are typically generated by so-called Generative Adversarial Networks (“GANs”), and Nvidia has provided a large number of implementations and sample networks. A number of artifacts come with the territory of GAN portraits, and below we will walk through the ones that we believe indicate that these photos are computer-generated.
The individual faces tend to look fairly realistic, but as part of the learning process that goes into training the neural network, the training data often need to be “normalized”. This is why eyes, mouths, cheeks, jaws, hairlines, etc., sit in the same positions in most Deep Fake photos. You can observe some of these features in the blurred picture above: ears and hairlines are almost perfectly aligned.
The same feature placement is evident in the picture above. Compare this to a picture of the first rows from the list of Committee Chairmen of the US Senate, where facial features are not aligned. Tilted heads, for instance, are rarely present in GAN-generated faces, an effect that stems from the networks usually being trained on cropped and straightened pictures, as described above. Moreover, real group photos show a natural variety of poses and backgrounds, features that are largely missing from computer-generated facial pictures.
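The alignment tell can even be quantified. The sketch below assumes you have extracted one landmark (say, the left-eye center, as a fraction of image width and height) from each headshot in a batch; if those positions barely vary across photos, the batch behaves like the output of a generator trained on cropped-and-aligned faces. The coordinates here are illustrative, not measured from the sites in question.

```python
# Hedged sketch: measuring the "aligned features" artifact across a batch
# of headshots. Landmark positions are (x, y) fractions of image size;
# near-zero spread across photos is suspicious. The numbers below are
# made up for illustration, not taken from the websites discussed.
from statistics import pstdev

def alignment_score(points):
    """Mean per-axis standard deviation of a landmark's position (0 = identical)."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (pstdev(xs) + pstdev(ys)) / 2

# Left-eye centers in a suspected GAN batch vs. a set of real candid photos.
gan_batch = [(0.38, 0.42), (0.39, 0.42), (0.38, 0.43), (0.39, 0.41)]
candid_batch = [(0.31, 0.45), (0.44, 0.38), (0.36, 0.52), (0.48, 0.40)]

print(f"GAN-like spread: {alignment_score(gan_batch):.3f}")
print(f"Candid spread:   {alignment_score(candid_batch):.3f}")
```

In practice one would pull the landmarks from a face-detection library rather than by hand, but the comparison is the same: aligned generator output clusters tightly, real candid photos scatter.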
An article in the NYT highlighted these features well: https://www.nytimes.com/interactive/2020/11/21/science/artificial-intelligence-fake-people-faces.html. Put your finger on a facial feature, start the animation, and you will notice it quickly. There are computer programs that can classify photos as likely GAN-generated or not; we have not used them here, but have instead looked at various other features.
Another feature of GAN photos, also visible in the pictures above, is the backgrounds. In some cases they are random; in others they are vivid and almost Salvador Dalí-esque, with more or less realistic background patterns. We also note that some photos are in color while others are not, even though on the websites all of them are in greyscale.
In the picture above, we can observe that the man has so much gravitas that his presence is actually distorting the background wall: you can see how the black lines on the right curve around his head. We also note that almost all of the photos have different backgrounds.
To illustrate a few of the artifacts, we have generated two portraits on thispersondoesnotexist.com and highlighted a few of the easy-to-spot issues. Ears, earrings, and clothes are the easiest to spot. You can observe the same salient feature placement: eyes, hairlines, mouths, and noses are in the same areas in the two photos. Computer-generated hair also tends to be less smooth and more whirly. Once you see these artifacts, you cannot unsee them.
While most humans have slightly asymmetric faces, GANs typically take this to extremes. Ears, glasses, and earrings are often distorted or markedly different from side to side. We illustrate a few of the issues below.
In the first picture, we note that the man’s ears are placed at different heights. In the second, the woman is wearing an earring on one ear only. In the third, the man’s glasses have a white dot on the right lens but not on the left. The other pictures are rife with such features. These are documented flaws of generated pictures. So, if you are using such pictures, make sure you do not pick people wearing earrings or glasses.
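The asymmetry tell can also be scored crudely in code. The sketch below mirrors one half of a (synthetic) grayscale face grid onto the other and takes the mean absolute pixel difference; a lone "earring" on one side, like the one in the second picture, spikes the score. The grids are toy data, assumed for illustration only.

```python
# Hedged sketch: a crude left/right asymmetry score. Compare each pixel
# with its horizontal mirror and average the absolute differences. Real
# faces are mildly asymmetric; single-sided accessories (one earring,
# mismatched glasses) on GAN faces push the score up sharply.
def asymmetry_score(pixels):
    """pixels: grid of grayscale values -> mean |left - mirrored right|."""
    w = len(pixels[0])
    diffs = [abs(row[c] - row[w - 1 - c]) for row in pixels for c in range(w // 2)]
    return sum(diffs) / len(diffs)

# Synthetic 4x6 grids: one mirror-symmetric, one with a bright "earring"
# patch on a single side of the last row.
symmetric_face = [[10, 40, 80, 80, 40, 10] for _ in range(4)]
one_earring = [[10, 40, 80, 80, 40, 10] for _ in range(3)] + [[10, 40, 80, 80, 40, 200]]

print(asymmetry_score(symmetric_face))  # 0.0
print(asymmetry_score(one_earring))
```

A real screening tool would of course work on detected face crops rather than raw grids, but the principle, mirror-and-compare, is the same one your eye applies when it catches the missing earring.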
Can it work?
While these types of organizations have a low probability of attracting institutional interest, they have better odds in a world where on-site visits are rarer and investment managers have adopted a remote-working philosophy. With an aggressive sales force promising predictable, high returns, they may well attract retail clients who cannot afford proper due diligence.
We do not think any readers of this article would fall for the firms and tactics above; rather, we wrote this as an attempt to find new, non-obvious places to look for misrepresentation. As pointed out in this Reuters article (https://www.reuters.com/article/us-cyber-deepfake-activist-idUKKCN24G15E), several other organizations have already been fooled by deep fakes.
Stay vigilant out there!
And as usual, the database is updated and ready for your analysis.