30-03-2023

The Return of Location? Can AI Reshape the Real Estate Landscape?

Many consider last week, the first week of the GPT-4 model (and of the new ChatGPT that uses it as its core module), to be the “First Impact” of the new era of AI. Although previous versions of ChatGPT amazed users with their human-like conversation abilities, they still felt like "chatbots": providing plausible but made-up information, struggling to verify facts, and being almost entirely incapable of mathematical reasoning beyond the high-school level, among other limitations.

However, everything seemed to change overnight when GPT-4 was released. The new GPT model not only generates more human-like text and generalizes better across languages and dialects, but also exhibits emerging logical reasoning capabilities: synthesizing information, understanding and solving math and science problems, recognizing and explaining humor and memes, and more. The new model is so powerful that people have to rethink the very definitions of language and intelligence (Large Language Models like ChatGPT might completely change linguistics) and, of course, confront the grim prospect that numerous jobs could be rendered obsolete. From OpenAI's research, here is a list of jobs that could be lost to AI.

[Figure Alex11: jobs at risk from AI, based on OpenAI's research]

Size matters

Why has ChatGPT improved so much? In short, size.

The number of parameters roughly represents an AI model's "complexity." Ideally, more parameters would allow the model to capture and represent more complex relationships, in other words, get smarter. Despite the challenges of large models—requiring huge datasets, excellent engineering, and enormous computational resources—researchers continue to pursue ever-larger models, competing for machine intelligence breakthroughs.
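To make "parameters" concrete, here is a minimal sketch (assuming PyTorch is installed) that counts the learnable weights of a toy network. GPT-scale models are the same idea, just with billions of such numbers; the layer sizes below are made up for illustration.

```python
# A minimal sketch: "parameters" are simply a model's learnable weights,
# and counting them gives a rough measure of its capacity.
import torch.nn as nn

# A toy two-layer network, nowhere near GPT scale.
tiny_model = nn.Sequential(
    nn.Linear(128, 512),   # 128*512 weights + 512 biases
    nn.ReLU(),
    nn.Linear(512, 128),   # 512*128 weights + 128 biases
)

n_params = sum(p.numel() for p in tiny_model.parameters())
print(f"{n_params:,} parameters")  # ~132k here; GPT-3 has about 175 billion
```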

The figure below shows the growth in the scale of deep learning models over the years. GPT-4 sits at the top: the widely circulated figure of 170 trillion parameters is unconfirmed, but the model is believed to be far larger than GPT-3’s 175 billion parameters. We have every reason to believe this advantage won’t last long, as all the major tech giants are diving headfirst into the world of large models.

[Figure Alex12: growth in the number of parameters of deep learning models over time]

It is important to note that ChatGPT represents progress only in natural language modeling. Other areas of AI, such as computer vision, have seen similarly revolutionary advances. The text-to-image models represented by Midjourney, DALL·E, and Stable Diffusion can generate not only convincing fictitious images but also endless variations of forms and artistic styles (whether this counts as "artistic creativity" is more a semantic question than a factual one). For many job positions, AI replacing humans has moved from science fiction to reality.

The return of location?

What about the real estate industry in this new era of AI? We first asked ChatGPT, and the answers were a bit boring - yes, we can get a prettier building design using DALL·E; yes, we can use an AI model for better valuation; we can even get an “enhanced customer experience” by chatting with a ChatGPT broker - but none of this is likely to make it any easier to buy or rent a home.

We think the impact of AI on real estate should be more fundamental.

The COVID pandemic and remote working have fostered a "locationless" mindset: if most communication can be done online, why bother with designated physical spaces?

However, in the age of generative AI, virtual experiences will become increasingly immersive and ubiquitous: emails, text messages, and blogs written by ChatGPT; nonexistent images generated by Midjourney; virtual videos or AR effects created by even more advanced AI. This will leave human-to-human interactions and physical environments as the last bastions of truly "real" experiences. As a result, the importance of location in real estate is likely to regain prominence, with people seeking authentic connections to their surroundings. The value of being grounded in a physical space and experiencing the world through our senses will probably once again capture our collective imagination. In this context, "location, location, location" takes on renewed significance, as the quest for genuine experiences drives us back to physical space.

Appendix: no magic in this transformer

If you are interested in how ChatGPT works, the following is an intuitive introduction to its core module, the GPT model. GPT stands for “Generative Pre-trained Transformer”. It is one of many AI language models based on an architecture called the Transformer.

The Transformer architecture was introduced by Vaswani et al. in a 2017 paper titled "Attention Is All You Need." The core idea is that instead of having the algorithm “read” a sequence (say, the words of a book) strictly in order, word by word, we let it focus on all relevant parts at once using a mechanism called “attention” - much like a human reader's ability to focus on the important information in a text and relate it to what was mentioned earlier. With this ability to understand context, AI models based on the Transformer have made revolutionary progress.
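For readers who like code, here is a minimal sketch of the attention operation described above (assuming NumPy; a real Transformer adds learned projection matrices, multiple heads, and masking, but the core idea is the same: every position weighs every other position).

```python
# A minimal sketch of scaled dot-product attention, the core operation
# of the Transformer.
import numpy as np

def attention(Q, K, V):
    """Each output row is a weighted mix of the rows of V, where the weights
    say how strongly each query position 'attends' to each key position."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # similarity between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over positions
    return weights @ V

# 5 token positions, 8-dimensional representations (made-up numbers).
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(5, 8))
print(attention(Q, K, V).shape)  # (5, 8): one context-aware vector per token
```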

The GPT model first reads enormous amounts of text from books and web pages (GPT-3, for example, was trained on about 570GB of text, roughly equivalent to 1,000,000 average-sized books). In this process, the model learns grammar, syntax, semantics, and some facts about the world.

After that, the model is trained on specific tasks, such as translation, sentiment analysis, or question-answering.
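As a rough illustration of both steps, here is a minimal sketch of the next-token-prediction objective (assuming PyTorch, with a trivial stand-in for the actual Transformer): pre-training and task-specific fine-tuning both come down to this kind of loss, just computed on different data.

```python
# A minimal sketch of the GPT-style training objective: given the previous
# tokens, predict the next one, and penalize wrong guesses with cross-entropy.
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64
embed = nn.Embedding(vocab_size, d_model)
lm_head = nn.Linear(d_model, vocab_size)   # stand-in for the full Transformer

# A made-up batch: 4 sequences of 16 token ids.
tokens = torch.randint(0, vocab_size, (4, 16))
inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict token t+1 from token t

logits = lm_head(embed(inputs))                   # shape (4, 15, vocab_size)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()   # gradients nudge the weights toward better predictions
print(float(loss))
```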

The final step is to teach the GPT model how to interact with human users (the ‘Chat’ part, see the diagram below). Researchers first “teach” the AI what a “good” response looks like: using polite language instead of toxic language, being impartial rather than extreme, and so on. The clever move is then to train a model that “rewards” good responses, and to optimize the chat model's behavior against that reward using reinforcement learning.
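To make the “reward” idea a bit more concrete, here is a minimal sketch of a preference-based reward model (assuming PyTorch; the sizes and embeddings are made up). The reinforcement-learning stage that then optimizes the chat model against this reward, PPO in Ouyang et al., is omitted here.

```python
# A minimal sketch of the reward model behind the alignment step: it is
# trained so that a human-preferred response scores higher than a rejected one.
import torch
import torch.nn as nn

d_model = 64
reward_model = nn.Linear(d_model, 1)   # stand-in: response embedding -> scalar score

# Made-up embeddings for preferred and rejected responses to the same prompts.
chosen = torch.randn(8, d_model)
rejected = torch.randn(8, d_model)

r_chosen = reward_model(chosen)
r_rejected = reward_model(rejected)

# Pairwise ranking loss: push the chosen score above the rejected score.
loss = -nn.functional.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()
print(float(loss))
```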

[Figure Alex13: diagram of the training process with human feedback]

Picture from Ouyang, Long, et al. “Training language models to follow instructions with human feedback.” arXiv preprint arXiv:2203.02155 (2022).

This is just an intuitive explanation. The real model is more complex. <some recommendations to know more about GPT model>