Finnish Fund Manager Launches 'Buffett-In-A-Box' A.I.-Based Fund... There's Just One Thing

by Tyler Durden

Amid the empty-vessel-driven "deep-learning", "artificial-intelligence", and "algorithmic" narratives-du-jour, more and more fund managers are jumping on the bandwagon. The latest is Finnish fund manager FIM, which is introducing the first investment fund in the Nordic region in which a self-learning algorithm picks all the stocks.

As Bloomberg reports, the FIM Artificial Intelligence fund, which targets returns of 3 percentage points above the MSCI World Index, seeks to tease out patterns even an experienced fund manager may not detect, according to Chief Investment Officer Eelis Hein, who oversees 5.6 billion euros ($6.6 billion) in investments at FIM Asset Management.

“This is the next revolution across society, including in investing,” Hein said in an interview in Helsinki. “Investors are hugely interested.”

The “Warren in a box” technology, a nod to famed value investor Warren Buffett, is the product of more than two years of work by Acatis Investment GmbH and NNaisense SA, a Lugano, Switzerland-based developer of artificial intelligence.

That all sounds very exciting and 'new' and 'tech' and awesome. There's just one thing...

Johnny-5 sucks at stock-picking...

Remember what that AI Fund CEO said... “EquBot AI Technology with Watson has the ability to mimic an army of equity research analysts working around the clock, 365 days a year, while removing human error and bias from the process.”

But hey, FIM is not giving up; they drop some more complex words to 'splain away any potential doubts an investor may have...

“AI may detect non-linear patterns that traditional quantitative analysis is not able to identify,” Hein at FIM said. “It has no emotions: it hasn’t felt fear during a crash nor euphoria when markets are up.”

Comments
zorba THE GREEK:

I think we will be seeing Buffett in a box soon enough, but not soon enough for me.

A Sentinel:

It would be nice. But this won’t end his career. The phrase “removing human error” is probably the lamest mis-'splaining I’ve seen applied to AI. There are so many more apt things to say...

And THAT shows why these guys won’t rule the universe and box Buffett... they are clueless about what it is, much less how it works: they DIDN’T WRITE IT!!!! It was machine-learning guys, most likely copying some neural network.

Others will come along.

Normalcy Bias:

Will the investors in this fund also get a taxpayer-funded bailout after the next crash?

In other words, like Buffett, is it also a 'can't-lose' proposition?

Sy Kloine Bee:

Can we get Hitler in a box instead?

GETrDun:

Unicorn fart technology.

techpriest:

Watson is a good AI for what it is built for, but I think people put too much faith in what it can do.

For those not familiar, Watson takes unstructured data, like this comment, and turns it into structured data, like JSON. The tradebot would then need to take this data and make something of it.
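Here is a toy sketch of that unstructured-to-structured step. This is not Watson's actual API, just a minimal, self-contained illustration of turning a comment into JSON:

```python
import json
import re

# Toy stand-in for a Watson-style NLU step: take unstructured text and
# emit structured JSON (tickers + crude sentiment). Watson's real output
# is far richer; this only illustrates the transformation being described.
POSITIVE = {"buy", "upgrade", "beat", "growth"}
NEGATIVE = {"sell", "downgrade", "miss", "crash"}

def analyze(text: str) -> str:
    tokens = re.findall(r"[a-z']+", text.lower())
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    doc = {
        "text": text,
        "tickers": re.findall(r"\$[A-Z]{1,5}", text),
        "sentiment": "positive" if score > 0 else "negative" if score < 0 else "neutral",
    }
    return json.dumps(doc, indent=2)

print(analyze("Analyst issues a buy upgrade on $AAPL after an earnings beat"))
```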

However, having worked with Watson in other contexts, I've learned that you still have to make a lot of decisions about what data to use, how important it is, etc. This is where the limitations of current AI come in: design decisions still have to be made when setting it up, and those decisions will have a substantial impact on how the AI operates.

For example, if part of the AI's goal is to make tradeable decisions on news, is it merely going to attempt to correlate news with positive and negative sentiment from different sources (e.g., a sell recommendation from one author on Seeking Alpha vs. a buy rec from another), or does it correlate specific technical analysis with tradeable decisions? Is it looking for news about a company, or does it try to factor in decisions from governments and central banks? What is relevant vs. extraneous data to throw out in an article?

Any of these could be done, but the question is: what are you going to ask the developers to include in your AI? And do you know what the implication of each decision could be?
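To make those design decisions concrete, here is a hypothetical configuration sketch. None of these field names come from FIM, EquBot, or Watson; they simply enumerate the judgment calls described above:

```python
from dataclasses import dataclass, field

# Hypothetical knobs a developer must set before any "self-learning" begins.
# Every field is a human design decision that shapes how the AI behaves.
@dataclass
class NewsModelConfig:
    sources: list = field(default_factory=lambda: ["seeking_alpha", "reuters"])
    use_author_sentiment: bool = True    # correlate buy/sell recs by author?
    use_technical_signals: bool = False  # or map technical analysis to trades?
    include_macro_events: bool = True    # factor in central banks / governments?
    relevance_threshold: float = 0.6     # below this, an article is thrown out

config = NewsModelConfig(use_technical_signals=True)
print(config)
```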

A Sentinel:

We agree (I’ve never used Watson, though), but I strongly suspect that this isn’t the deus ex machina they’re expecting. You and I know the ropes — there are plenty more. But no matter what non-compete / nondisclosure you sign, if you go to Goldman, say, or the Saudis, and vomit up their good ideas for a nice check or two... HOW THE HELL would they EVER figure it out??

There’s zero safety unless you develop something really new and implement it in one career cycle, don’t publish it or blab... then maybe.

techpriest:

I'll give an example and see if this helps:

There's a Watson-based "robot lawyer" that can look up all case law, legislation, and regulations, and people hailed this as an amazing new thing. But after reading the specs, you find that in reality Watson is doing three things:

1) Speech-to-text and deciphering what you want to look for.

2) A search engine that matches your keys with tags calculated for each document (This isn't new, but Watson is the cheapest commercial version at ~$7/1,000 documents), and generating a list of relevant documents.

3) Text-to-speech that informs you it found documents.

In other words, it's a glorified version of Apple's Siri. There are some useful aspects to such a thing, since you might be getting a better search engine, but it isn't going to deliver an understanding of the documents.
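A minimal sketch of that three-step pipeline, with the speech endpoints stubbed out. This is illustrative Python, not Watson's API; the point is that the middle step is tag-matching search, not legal reasoning:

```python
# Tags per document stand in for the precomputed keys a search engine matches.
DOCUMENTS = {
    "smith_v_jones.txt": {"negligence", "tort", "appeal"},
    "tax_code_s179.txt": {"tax", "deduction", "depreciation"},
}

def speech_to_text(audio: bytes) -> str:
    return "find cases about negligence"  # stub: step 1 transcribes audio

def search(query: str) -> list:
    keys = set(query.lower().split())     # step 2: match keys against tags
    return [doc for doc, tags in DOCUMENTS.items() if keys & tags]

def text_to_speech(results: list) -> str:
    return f"I found {len(results)} documents: {', '.join(results)}"  # step 3

print(text_to_speech(search(speech_to_text(b""))))
```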

I see Watson as not necessarily something new, so much as a cheap commercialization of what Google, Apple, and other big players have already been doing for some time. That doesn't make you Google, it just gives you similar tools.

A Sentinel:

Thanks for your added info. On first read I assumed it was some sort of machine-learning suite built around old-fashioned hierarchical DB stuff. So it mines text and then talks? Seems like a bit of flashy bling.

abbottmd:

I have programmed some neural networks (using Keras, with Theano and TensorFlow backends) and have seen the accuracy improve in later iterations. However, the converse is that these models tend to overfit to the training set, which I would bet is exactly what happened in this stock-trading algo. Once it sees different data, performance drops off a cliff. Still, if they are updating the model, it could learn from these mistakes... not with my money, though. And if you think it is destined to lose, you could program a competitive AI that does the opposite, and by definition it should win, minus trading costs.
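A minimal Keras sketch of that overfitting failure mode, fitting random labels where there is genuinely nothing to learn: training loss keeps falling while validation loss does not. (Assumes TensorFlow's bundled Keras.)

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = rng.integers(0, 2, size=500)  # random labels: no real signal to find

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
history = model.fit(X, y, validation_split=0.2, epochs=50, verbose=0)

print("final train loss:", history.history["loss"][-1])      # keeps improving
print("final val loss:  ", history.history["val_loss"][-1])  # stalls or worsens
```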

any_mouse:

Thinking that the man-made Markets are not comprised of... Humans.

Fear. Greed. Exuberance. Parts of healthy markets. Not parts of binary logic.

The programming is only as good as the programmers.

Evidently, the humans who designed this AI are not very good at trading themselves. They sought to create a machine that would somehow do it better than they could.

And they were depending on the machine to learn better trading algorithms by itself.

GIGO rules.

tribune:

Eventually AI will suffer from herding, and as more AI comes to market, profits in specific areas will get competed away (the spread between the AIs). All these smart machines may also come to sell decisions at the same time...

A Sentinel:

It doesn’t follow a declining trend. It gets stronger.

It is true that particular formula configurations (say, trading strategies) age and lose edge — but that’s a static method being copied, more than a good tool. Take Bayes networks (of various sorts): they aren’t bad, but they were the shit 12-15 years back. Microsoft was incredibly proud of their expertise. Now neural nets are the rage. That’s newer technology, but still a couple of decades in evolution. (I believe, personally, that deep learning is fine for object ID, but lacks an edge in more valuable tasks where other technology just does better.)

Deplorable:

Just like those huge swarms of birds that fly chaotically across the sky. 

They never fly upward in a linear direction... why is that?

Deep_Out-of-the-Money:

“It has no emotions: it hasn’t felt fear during a crash nor euphoria when markets are up.”

Nor does a sociopath. If that is all it took to be a good investor/trader...

kwaremont:

artificial intelligence vs. natural stupidity

Old Poor Richard:

Listen, and understand. That A.I. trading bot is out there.  It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are bankrupt.

jmack:

So, like every other crack-addled stockbroker.

W.O.T.:

"I am looking for Sara, to con her."

 

jmack:

To paraphrase Buffett: 'I never invest in a stock picker I don't understand.'

Bernie Madolf:

Wow, so an algorithm is underperforming other algorithms.

Big news

O boy

Catullus:

I worked for a company where Uncle Warren gave us money to bail us out. No algo does what he did.

Make no mistake: he makes his money as a loan shark. Equities are for sheep.

W.O.T.:

The computery bastard probably figured out how to talk to other AI programs, which are subsequently shorting the sh*# out of the ETF! He knows something we don't! Can you try an AI for securities fraud?
dumdum:

Humans! Don't you just luv 'em. Everything has to be complicated.

I discovered an amazingly accurate indicator 25 years ago. I've been using it ever since. 

It's called a 200-day simple moving average. The K.I.S.S. principle works wonders.

It also stops you from straying onto the wrong side of the trend.
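The rule takes only a few lines of pandas; `prices.csv` here is a placeholder for any daily price series with a `close` column:

```python
import pandas as pd

# Stay long only while price is above its 200-day simple moving average.
prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True)

sma200 = prices["close"].rolling(window=200).mean()
long_side = prices["close"] > sma200          # True = right side of the trend

signal = long_side.map({True: "long", False: "cash"})
print(signal.tail())
```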

GeoffreyT:

Yawn... Deep learning is almost always just "Principal Components for CompSci grads who don't know what Principal Components is, and with no analysis or understanding of the statistical properties of the resulting estimators".

Similarly, Machine Learning is "Least-squares regression for CompSci grads who don't know what Least-squares regression is, and with no analysis or understanding of the statistical properties of the resulting estimators".

.

You know how MBAs learn how to use the regression tool in Excel, and think that means that the regressions they perform are sensible?

Well, CompSci grads learn how to use a Python library to minimise a loss function, and then do all sorts of stupid shit that misapplies linear regression.

Things like

  • specifying a functional form that's not linear;
  • specifying a functional form that is not stationary or where the two sides of the equation have different orders of integration;
  • specifying a regressor matrix that's not full rank...

Imagine a list of things that would result in an instant 'F' in a sophomore econometrics assignment, then do all that shit, and call the result 'Machine Learning' (at the same time as calling what you do 'software engineering' instead of 'coding').

ML is OLS with a limited dependent variable, where the only analysis of the results is a confusion matrix (i.e., false and true positives and negatives) and a very basic test of sub-sample parameter stability (i.e., examining the confusion matrix for the estimation sample, compared to the testing sample).

It's literally just that, and anybody with significant training in statistics or econometrics can see that immediately.
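Made literal, the claim looks like this scikit-learn sketch: a regression with a limited dependent variable, evaluated entirely by comparing the estimation-sample and testing-sample confusion matrices:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
# Binary outcome generated from a linear index plus noise.
y = (X @ np.array([1.0, -0.5, 0.2, 0.0, 0.3]) + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
clf = LogisticRegression().fit(X_train, y_train)

print(confusion_matrix(y_train, clf.predict(X_train)))  # estimation sample
print(confusion_matrix(y_test, clf.predict(X_test)))    # testing sample
```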

(In the same way, folks who want to be called 'software engineers' want their ginchy little Google Maps display layer to be called 'Cartography' - and 'Remote Sensing' if you change the background to satellite imagery; they also want to be called a hacker if they've ever retweeted Anonymous)

.

ML and DL are Statistical Learning's retarded cousins, who nevertheless have great marketing.

SL's been around for over a generation, and does the same sort of thing but actually analyses the statistical properties of the resulting estimators.

You can see the difference if you compare course outlines: for SL (which is usually taught in Mathematics or Statistics departments) the course outline reads as if it was written by a statistician; the ML/DL course outline reads as if it was written as a deck for a presentation to a VC group.

.

Generally, ML and DL are only used when there is nothing on the line - if you want your site to have a recommender system (getting people to click to buy shit) and you don't just want to use product classifications, it might add a few basis points to conversion rates and do so for a couple hundred bucks (because any schlub who can code in Python can implement a classifier).

In those situations, unsupervised feature extraction (i.e., principal components analysis with no constraints on the choice set) is a piss in the hand and can do very little harm.
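Concretely, that unsupervised feature extraction is principal components with no constraints on the choice set, as in this minimal scikit-learn sketch:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 50))       # e.g. 200 users x 50 behavioural signals

pca = PCA(n_components=5)
features = pca.fit_transform(X)      # project onto directions of max variance

print(features.shape)                # (200, 5): the extracted "features"
print(pca.explained_variance_ratio_) # variance captured by each component
```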

On the other hand: using the equivalent of badly-implemented, poorly-analysed regression in a situation where some shit will be genuinely at risk... that's fucking insane and will only ever be done by charlatans trying to make bank.

So that means that the implementation being discussed in the article is purely marketing.

.

I have some expertise in this shit - like @abbottmd above, I've recently done "deep learning" projects using TensorFlow (specifically Inception) - in my case, for imagery analysis.

Most recently I retrained the final couple of layers of Inception to recognise forestry plantations (at different stages of growth, and by hardwood/softwood class) from high-res (10cm/px-50cm/px) RGB-only aerial imagery. (If I were doing it again I would use TF-Slim instead of Inception: TF-Slim was not available when we did the project.)

It did a great job - better-than-human levels of precision, several thousand times faster than a trained human.
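Roughly what that retraining setup looks like with stock Keras, rather than the original TensorFlow Inception retrain scripts; the class list here is an assumption for illustration:

```python
import tensorflow as tf

NUM_CLASSES = 4  # assumed classes, e.g. hardwood, softwood, pasture, other

# Freeze the pretrained convolutional base; train only the new final layers.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # retrained head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```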

If we had had access to high-res imagery with a NIR band it would have been even more robust - we had LANDSAT and Sentinel-2 multispectral, but that's only 30m/px and 10m/px respectively (LANDSAT can be increased to almost 15m/px using panchromatic sharpening). Using the lower-res imagery enabled use of a few different vegetation-index approaches to act as a first filter for regions that had no vegetation (or were 100% pasture or broadacre crops).
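The vegetation-index first filter is simple to sketch: NDVI computed from the NIR and red bands, with parcels near zero dropped before the network ever runs. Band choices and the threshold below are illustrative:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    # Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

nir_band = np.random.rand(100, 100)   # placeholder for Sentinel-2 band 8
red_band = np.random.rand(100, 100)   # placeholder for Sentinel-2 band 4
has_vegetation = ndvi(nir_band, red_band) > 0.3  # crude first-pass mask
print(has_vegetation.mean())
```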

To put numbers on it: the project examined 20 years' worth of aerial imagery, for every land parcel in a state (2.7 million land parcels), in six hours - and was 99.3% accurate in identifying parcels that included plantation forestry, with a tiny, species-specific false-positive rate (it slightly overidentified hardwoods in state forests because of coppicing; we weren't required to report on state forests anyhow).

A separate algorithm was trained to segment images containing plantation forestry, into their plantation and non-plantation segments (removing roads, tracks, firebreaks, pasture, structures and background native vegetation). Again, it did the job spectacularly, and fast.

.

Anyhow... that's just by way of indicating that I know my arse from a hot rock when it comes to ML/DL, and I am not hostile to its use as a tool so long as its limitations are understood.