John Baez
57,712 followers

How math is changing

People are starting to use algebraic topology — basically the art of counting holes of different kinds — to study patterns in data. It's called topological data analysis.

I just learned a lot about this at a conference called "Applied Algebraic Topology 2017". But my own talk was not about applications. Instead, it was about how algebraic topology is changing our view of mathematical reality!

What's happening is that sets are slowly becoming less important, and spaces are becoming more fundamental. In the process, the concept of 'space' is getting more flexible.

So far this is mainly happening in the most sophisticated branches of pure math. You might not notice if you're working down in the trenches. But I think topological data analysis is a sign that this trend is spreading. We can now describe complicated spaces with interesting holes using a finite amount of data — suitable for computing.

Where will it lead? Nobody knows! You can see the story so far in my talk slides. But they're pretty hard-hitting, since I was talking to folks who know algebraic topology. So it might be better to hear what Yuri Manin said when he was asked what the future holds.

He started by saying "I don’t foresee anything extraordinary in the next twenty years." But then he described something pretty extraordinary:

...after Cantor and Bourbaki, no matter what we say, set theoretic mathematics resides in our brains. When I first start talking about something, I explain it in terms of Bourbaki-like structures: topological spaces, linear spaces, the field of real numbers, finite algebraic extensions, fundamental groups. I cannot do otherwise. If I’m thinking of something completely new, I say that it is a set with such-and-such a structure; there was one like this before, called this-and-that; another similar one was called this-and-this; so I apply slightly different axioms, and I will call it such-and-such. When you start talking, you start with this. That is, at first we start with the discrete sets of Cantor, upon which we impose something more in the style of Bourbaki.

But fundamental psychological changes also occur. Nowadays these changes take the form of complicated theories and theorems, through which it turns out that the place of old forms and structures, for example, the natural numbers, is taken by some geometric, right-brain objects. Instead of sets, clouds of discrete elements, we envisage some sorts of vague spaces, which can be very severely deformed, mapped one to another, and all the while the specific space is not important, but only the space up to deformation. If we really want to return to discrete objects, we see continuous components, the pieces whose form or even dimension does not matter. Earlier, all these spaces were thought of as Cantor sets with topology, their maps were Cantor maps, some of them were homotopies that should have been factored out, and so on.

I am pretty strongly convinced that there is an ongoing reversal in the collective consciousness of mathematicians: the right hemispherical and homotopical picture of the world becomes the basic intuition, and if you want to get a discrete set, then you pass to the set of connected components of a space defined only up to homotopy.

That is, the Cantor points become continuous components, or attractors, and so on — almost from the start. Cantor’s problems of the infinite recede to the background: from the very start, our images are so infinite that if you want to make something finite out of them, you must divide them by another infinity.

Here's the abstract of my talk. This should either make you curious enough to look at the slides, or confused enough to be happy you didn't:

Abstract. As algebraic topology becomes more important in applied mathematics it is worth looking back to see how this subject has changed our outlook on mathematics in general. When Noether moved from working with Betti numbers to homology groups, she forced a new outlook on topological invariants: namely, they are often functors, with two invariants counting as "the same" if they are naturally isomorphic. To formalize this it was necessary to invent categories, and to formalize the analogy between natural isomorphisms between functors and homotopies between maps it was necessary to invent 2-categories. These are just the first steps in the "homotopification" of mathematics, a trend in which algebra more and more comes to resemble topology, and ultimately abstract "spaces" (for example, homotopy types) are considered as fundamental as sets. It is natural to wonder whether topological data analysis is a step in the spread of these ideas into applied mathematics, and how the importance of "robustness" in applications will influence algebraic topology.

The slides are here:

http://math.ucr.edu/home/baez/alg_top/

#geometry

THE BLIND LEADING THE BLIND

How (not) to write mathematics

Some tips from the mathematician John Milne:

If you write clearly, then your readers may understand your mathematics and conclude that it isn't profound. Worse, a referee may find your errors. Here are some tips for avoiding these awful possibilities.

1. Never explain why you need all those weird conditions, or what they mean. For example, simply begin your paper with two pages of notations and conditions without explaining that they mean that the varieties you are considering have zero-dimensional boundary. In fact, never explain what you are doing, or why you are doing it. The best-written paper is one in which the reader will not discover what you have proved until he has read the whole paper, if then.

2. Refer to another obscure paper for all the basic (nonstandard) definitions you use, or never explain them at all. This almost guarantees that no one will understand what you are talking about (and makes it easier to use the next tip). In particular, never explain your sign conventions --- if you do, someone may be able to prove that your signs are wrong.

3. When having difficulties proving a theorem, try the method of "variation of definition"---this involves implicitly using more than one definition for a term in the course of a single proof.

4. Use c, a, b respectively to denote elements of sets A, B, C.

5. When using a result in a proof, don't state the result or give a reference. In fact, try to conceal that you are even making use of a nontrivial result.

6. If, in a moment of weakness, you do refer to a paper or book for a result, never say where in the paper or book the result can be found. In addition to making it difficult for the reader to find the result, this makes it almost impossible for anyone to prove that the result isn't actually there. Alternatively, instead of referring to the correct paper for a result, refer to an earlier paper, which contains only a weaker result.

7. Especially in long articles or books, number your theorems, propositions, corollaries, definitions, remarks, etc. separately. That way, no reader will have the patience to track down your internal references.

8. Write A==>B==>C==>D when you mean (A==>B)==>(C==>D), or (A==>(B==>C))==>D, or.... Similarly, write "If A, B, C" when you mean "If A, then B and C" or "If A and B, then C", or .... Also, always muddle your quantifiers.

9. Begin and end sentences with symbols wherever possible. Since periods are almost invisible (and may be mistaken for a mathematical symbol), most readers won't even notice that you've started a new sentence. Also, where possible, attach superscripts signalling footnotes to mathematical symbols rather than words.

10. Write "so that" when you mean "such that" and "which" when you mean "that". Always prefer the ambiguous expression to the unambiguous and the imprecise to the precise. It is the reader's task to determine what you mean; it is not yours to express it.

11. If all else fails, write in German.

These helpful tips are from his webpage:

http://www.jmilne.org/math/tips.html

He has some footnotes, including this for item 11:

The point is that most mathematicians find it very difficult to read mathematics in German, and so, by writing in German, you can ensure that your work is inaccessible to most mathematicians, even though, of course, German is a perfectly good language for expressing mathematics.

Hmm. He could have chosen Basque, but he chose German.

I thank Nina Otter for pointing out this article.

The Trump presidency that we fought for, and won, is over. - Steve Bannon

Bannon said this because he got kicked out. But it could be truer than he knows.

First, Trump's Manufacturing Jobs Initiative and Strategic and Policy Forum collapsed as leaders of big companies got scared of being associated with him.

It began last Saturday when the head of the drug company Merck left, saying:

As CEO of Merck and as a matter of personal conscience, I feel a responsibility to take a stand against intolerance and extremism.

Trump blasted him with a tweet. By Tuesday, leaders from Intel, Under Armour, the Alliance for American Manufacturing, the AFL-CIO, 3M and Campbell Soup Company had resigned. Trump tweeted:

For every CEO that drops out of the Manufacturing Council, I have many to take their place. Grandstanders should not have gone on. JOBS!

But Tuesday morning he switched gears again on the Charlottesville neo-Nazi rally, declaring a moral equivalence between both sides and garnering instant praise from KKK leader David Duke:

Thank you President Trump for your honesty & courage to tell the truth about Charlottesville [...]

So, the whole Strategic and Policy Forum had a phone conference and decided to jump off Trump's sinking ship. The head of General Motors explained why:

General Motors is about unity and inclusion and so am I. Recent events, particularly those in Charlottesville, Virginia, and its aftermath, require that we come together as a country and reinforce values and ideals that unite us — tolerance, inclusion and diversity — and speak against those which divide us — racism, bigotry and any politics based on ethnicity.

The Manufacturing Jobs Initiative also collapsed, and Trump hastily disbanded both groups later that day.

Republicans in Congress continued standing like deer caught in the headlights of an oncoming car. Behind the scenes they must surely be making plans. Senator Bob Corker, a Republican from Tennessee, made the news by admitting the obvious:

I do think there need to be some radical changes. The president has not yet been able to demonstrate the stability nor some of the competence that he needs to demonstrate in order to be successful.

On Friday, the members of the President's Committee on the Arts and the Humanities all quit. The first letters of the paragraphs of their resignation letter spell RESIST. One of the members, Kal Penn, said this in an interview:

It became clear that the government became inoperative under this particular presidency. A lot of the work and the agencies have been frozen. There’s a big waste of taxpayer dollars. We had hope, but the president made comments that quite literally were in support of the domestic terrorists.

It’s one thing to say you want to serve the programs you were appointed to serve, regardless of politics, but after a certain point . . . we just don’t want our names attached to this in any way.

On Friday, the billionaire Carl Icahn quit serving as Trump's special advisor - apparently because Democrats were pointing out his conflicts of interest.

So the Trump team has been unraveling rapidly. Meanwhile, Trump's advisor Steve Bannon, who knew he was about to be fired, gave a fascinating interview in which he undercut a lot of Trump's recent moves. Notably:

There’s no military solution [to North Korea’s nuclear threats], forget it. Until somebody solves the part of the equation that shows me that ten million people in Seoul don’t die in the first 30 minutes from conventional weapons, I don’t know what you’re talking about, there’s no military solution here, they got us.

and

Ethno-nationalism—it's losers. It's a fringe element. I think the media plays it up too much, and we gotta help crush it, you know, uh, help crush it more. These guys are a collection of clowns.

He was fired on Friday.

A lot of people thought Steve Bannon was behind Trump's move to appeal to "ethno-nationalists" (also known as racists) like the ones who rallied in Charlottesville. It would be very interesting if that's not true. I don't really understand Bannon; perhaps he's just an "economic nationalist" as he seems to claim in the recent interview. If so, it makes this question all the more pressing: why did Trump shoot himself in the foot by not coming out and forcefully condemning neo-Nazis and the KKK?

One possibility is that he's a racist... but that doesn't fully explain it, because a simple sense of political survival is enough to keep most racist politicians in the US in line these days. They happily pass laws designed to make it harder for black people to vote, etcetera, but still they have the sense to condemn the KKK when it's called for, since nowadays that's required if you're going to win elections or even get anything done.

One possibility is that Trump is just stupid, but that doesn't explain it either.

One possibility is that Trump can't stand being forced to do anything, so after he knuckled under and dutifully read a statement against racism from a teleprompter on Monday, he became angry and couldn't resist doubling down on his original position on Tuesday.

This seems more plausible, but it doesn't explain why he took that position in the first place. Since when does not condemning neo-Nazis and the KKK seem like a good idea?

Here's one theory: Trump feels he's going to be pushed out of the presidency, and he wants some angry crowds with weapons to stick up for him when that day comes.

If that seems implausible, remember how he spoke at a rally of motorcycle gangs last year:

https://www.washingtonpost.com/local/dc-politics/donald-trump-to-speak-at-rolling-thunder-rally/

https://www.washingtonpost.com/news/worldviews/wp/2016/06/01/how-the-loyal-support-of-biker-gangs-unites-trump-and-putin/

It would be good to discuss this in a rational polite way, but that's not easy. I find that moderating any discussion of Trump is a full-time job, and unpleasant too. So, as usual for posts on this subject, I won't enable comments. Too bad - people need to think about what is going on as events heat up. If you want to reshare this and manage your own discussion, please do.

Math made difficult: how to multiply using trig

Back in the 1500's, people on long sea journeys navigated using the stars. They needed big tables of trig functions to do this!

These tables were made by astronomers. Those folks did thousands of calculations. Often they needed to multiply large numbers! That was tiring... but around 1580, they figured out a clever way to approximately multiply large numbers using tables of trig functions.

Here's an example:

Say you want to multiply 105 and 720. You do this:

• Shift the decimal point in each one to get numbers less than 1. You get 0.105 and 0.720.

• Look up angles whose cosines are these numbers. Use a table! The cosine of 84° is about 0.105, and the cosine of 44° is about 0.720.

• Add and subtract these angles: 84° + 44° = 128° and 84° - 44° = 40°.

• Use a table to look up the cosines of these new angles: -0.616 and 0.766.

• Take their average, which is 0.075.

• Scale it back up. At the beginning of this game you took 105 and 720 and shifted the decimal point 3 places to the left in each. So now, shift the decimal point 3+3 = 6 places to the right! The answer is 75,000.

It's not exactly right, but it's pretty close!
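
Here's the whole recipe as a little Python sketch. It's just my own reconstruction: math.cos rounded to three decimal places, at whole-degree angles, stands in for a printed table.

    import math

    def table_cos(deg):
        # a crude stand-in for a printed table: cosines to 3 decimal places
        return round(math.cos(math.radians(deg)), 3)

    def table_acos(x):
        # the whole-degree angle whose tabulated cosine is closest to x
        return min(range(0, 91), key=lambda d: abs(table_cos(d) - x))

    def prosthaphaeresis(x, y, shift):
        # shift the decimal point in each factor to get numbers less than 1
        a, b = x / 10**shift, y / 10**shift
        A, B = table_acos(a), table_acos(b)
        # average the tabulated cosines of the sum and difference of the angles
        avg = (table_cos(A + B) + table_cos(A - B)) / 2
        # shift the decimal point back: shift places for each factor
        return avg * 10**(2 * shift)

    print(prosthaphaeresis(105, 720, 3))   # about 75000, while 105 * 720 = 75600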

Puzzle. Why is it close?

This wacky-sounding method has a wacky-sounding name: it's called prosthaphaeresis.

Tables of logarithms are easier. To multiply two numbers you just look up their logs, add them, and then look up the number whose log is that! But logs were invented only later, in 1614.
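
Again as a Python sketch, with math.log10 playing the role of the table:

    import math

    x, y = 105, 720
    # look up the logs, add them, then find the number whose log is that
    print(10 ** (math.log10(x) + math.log10(y)))   # 75600.0, up to rounding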

So for a while, prosthaphaeresis was the way to go!

And Napier, the guy who invented logs, did it after studying this earlier method.

It goes to show: a clunky way of doing something is often the first step toward something less clunky. You can't be slick right away!

#geometry

P ≠ NP?

+Alok Tiwari pointed out a new paper by Norbert Blum, which claims to solve a famous math problem. So if this paper is wrong, don't blame me. Blame him. ʘ‿ʘ

• Norbert Blum, A solution of the P versus NP problem, https://arxiv.org/abs/1708.03486.

Just kidding! Most papers that claim to solve hard math problems are wrong: that's why these problems are considered hard. Alok Tiwari knows this.

But these papers can still be fun to look at, at least if they're not obviously wrong. It's fun to hope that maybe today humanity has found another beautiful grain of truth.

I'm not an expert on the P = NP problem, so I have no opinion on this paper. So don't get excited: wait calmly by your radio until you hear from someone who actually works on this stuff.

I found the first paragraph interesting, though. Here it is, together with some non-expert commentary. Beware: everything I say could be wrong!

Understanding the power of negations is one of the most challenging problems in complexity theory. With respect to monotone Boolean functions, Razborov [12] was the first who could shown that the gain, if using negations, can be super-polynomial in comparision to monotone Boolean networks. Tardos [16] has improved this to exponential.

I guess a Boolean network is like a machine where you feed in a string of bits and it computes new bits using the logical operations 'and', 'or' and 'not'. If you leave out 'not' the Boolean network is monotone, since then making more inputs equal to 1, or 'true', is bound to make more of the output bits 1 as well. The author is saying that including 'not' makes some computations vastly more efficient... but that this stuff is hard to understand.
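
To make that concrete, here's a toy example in Python (my own illustration, nothing from Blum's paper). Monotonicity means that turning inputs from 0 to 1 can never turn the output from 1 to 0, and for small examples you can check it by brute force:

    from itertools import product

    def monotone(x1, x2, x3):
        return (x1 and x2) or x3          # only 'and' and 'or': monotone

    def non_monotone(x1, x2, x3):
        return (x1 and not x2) or x3      # the 'not' breaks monotonicity

    def is_monotone(f, n):
        # brute force: whenever a <= b bitwise, f(a) <= f(b) must hold
        return all(f(*a) <= f(*b)
                   for a in product([0, 1], repeat=n)
                   for b in product([0, 1], repeat=n)
                   if all(ai <= bi for ai, bi in zip(a, b)))

    print(is_monotone(monotone, 3))       # True
    print(is_monotone(non_monotone, 3))   # False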

For the characteristic function of an NP-complete problem like the clique function, it is widely believed that negations cannot help enough to improve the Boolean complexity from exponential to polynomial.

A bunch of nodes in a graph are a clique if each of these nodes is connected by an edge to every other. Determining whether a graph with n vertices has a clique with more than k nodes is a famous problem: the clique decision problem.
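
Just to pin the problem down, here's the brute-force approach in Python. This is a sketch of the definition, not a serious algorithm: it tries every possible set of k nodes, so its running time blows up as k grows with n.

    from itertools import combinations

    def has_clique(n, edges, k):
        # does the graph on vertices 0,...,n-1 have a clique with k nodes?
        edge_set = {frozenset(e) for e in edges}
        return any(all(frozenset((u, v)) in edge_set
                       for u, v in combinations(nodes, 2))
                   for nodes in combinations(range(n), k))

    # a 4-cycle has cliques of size 2 but none of size 3:
    print(has_clique(4, [(0,1), (1,2), (2,3), (3,0)], 2))   # True
    print(has_clique(4, [(0,1), (1,2), (2,3), (3,0)], 3))   # False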

The clique decision problem is NP-complete. This means, among other things, that if you can't solve it with any Boolean network whose complexity grows like some polynomial in n, then P ≠ NP.

(Don't ask me what the complexity of a Boolean network is.)

I guess Blum is hinting that the best monotone Boolean network for solving the clique decision problem has a complexity that's exponential in n. And then he's saying it's widely believed that 'not' gates can't reduce the complexity to a polynomial.

Since the computation of an one-tape Turing machine can be simulated by a non-monotone Boolean network of size at most the square of the number of steps [15, Ch. 3.9], a superpolynomial lower bound for the non-monotone network complexity of such a function would imply P ≠ NP.

Now he's saying what I said earlier: if you show it's impossible to solve the clique decision problem with any Boolean network whose complexity grows like some polynomial in n, then you've shown P ≠ NP. This is how Blum intends to prove P ≠ NP.

For the monotone complexity of such a function, exponential lower bounds are known [11, 3, 1, 10, 6, 8, 4, 2, 7].

Should you trust someone who claims they've proved P ≠ NP, but can't manage to get their references listed in increasing order?

But until now, no one could prove a non-linear lower bound for the nonmonotone complexity of any Boolean function in NP.

That's a great example of how helpless we are: we've got all these problems whose complexity should grow faster than any polynomial, and we can't even prove their complexity grows faster than linear. Sad!

An obvious attempt to get a super-polynomial lower bound for the non-monotone complexity of the clique function could be the extension of the method which has led to the proof of an exponential lower bound of its monotone complexity. This is the so-called “method of approximation” developed by Razborov [11].

I don't know about this. All I know is that Razborov and Rudich proved a whole bunch of strategies for proving P ≠ NP can't possibly work. So he's a smart cookie.

Razborov [13] has shown that his approximation method cannot be used to prove better than quadratic lower bounds for the non-monotone complexity of a Boolean function.

So, this method is unable to prove a problem can't be solved in polynomial time. Bummer!

But Razborov uses a very strong distance measure in his proof for the inability of the approximation method. As elaborated in [5], one can use the approximation method with a weaker distance measure to prove a super-polynomial lower bound for the non-monotone complexity of a Boolean function.

This reference [5] is to another paper by Blum. And in the end, he claims to use similar methods to prove that the complexity of any Boolean network that solves the clique decision problem must grow faster than a polynomial.

So, if you're trying to check his proof that P ≠ NP, you should probably start by checking that other paper!

The picture below, by Behnam Esfahbod on Wikicommons, shows the two possible scenarios. The one at left is the one Norbert Blum claims to have shown.

A lot of Republicans have come out and denounced the racist rally and terrorism in Charlottesville in clear terms:

"White supremacy" crap is worst kind of racism - it's EVIL and perversion of God's truth to ever think our Creator values some above others. – Mike Huckabee

We should call evil by its name. My brother didn't give his life fighting Hitler for Nazi ideas to go unchallenged here at home. – Orrin Hatch

The Nazis, the KKK, and white supremacists are repulsive and evil, and all of us have a moral obligation to speak out against the lies, bigotry, anti-semitism, and hatred that they propagate. Having watched the horrifying video of the car deliberately crashing into a crowd of protesters, I urge the Department of Justice to immediately investigate and prosecute this grotesque act of domestic terrorism. – Ted Cruz

Very important for the nation to hear POTUS describe events in Charlottesville for what they are, a terror attack by white supremacists – Marco Rubio

I don't agree with these guys about much. But for this I applaud them!

On the other hand, here's what the famous former Ku Klux Klan leader said at the rally:

This represents a turning point for the people of this country. We are determined to take our country back, we're going to fulfill the promises of Donald Trump, and that's what we believed in, that's why we voted for Donald Trump, because he said he's going to take our country back and that's what we gotta do. – David Duke

And then there's our president, who blames people "on many sides".

And then there's the neo-Nazi website The Daily Stormer:

Trump comments were good. He didn't attack us. He just said the nation should come together. Nothing specific against us. [...] No condemnation at all. When asked to condemn, he just walked out of the room. Really, really good. God bless him.

DNA hackers

In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer.

It's not a realistic threat now - but it's cute enough to make a good SF story, at least. Here's the idea, as described by Wired:

The researchers started by writing a well-known exploit called a "buffer overflow," designed to fill the space in a computer's memory meant for a certain piece of data and then spill out into another part of the memory to plant its own malicious commands.

But encoding that attack in actual DNA proved harder than they first imagined. DNA sequencers work by mixing DNA with chemicals that bind differently to DNA's basic units of code—the chemical bases A, T, G, and C—and each emit a different color of light, captured in a photo of the DNA molecules. To speed up the processing, the images of millions of bases are split up into thousands of chunks and analyzed in parallel. So all the data that comprised their attack had to fit into just a few hundred of those bases, to increase the likelihood it would remain intact throughout the sequencer's parallel processing.

When the researchers sent their carefully crafted attack to the DNA synthesis service Integrated DNA Technologies in the form of As, Ts, Gs, and Cs, they found that DNA has other physical restrictions too. For their DNA sample to remain stable, they had to maintain a certain ratio of Gs and Cs to As and Ts, because the natural stability of DNA depends on a regular proportion of A-T and G-C pairs. And while a buffer overflow often involves using the same strings of data repeatedly, doing so in this case caused the DNA strand to fold in on itself. All of that meant the group had to repeatedly rewrite their exploit code to find a form that could also survive as actual DNA, which the synthesis service would ultimately send them in a finger-sized plastic vial in the mail.

The result, finally, was a piece of attack software that could survive the translation from physical DNA to the digital format, known as FASTQ, that's used to store the DNA sequence. And when that FASTQ file is compressed with a common compression program known as fqzcomp—FASTQ files are often compressed because they can stretch to gigabytes of text—it hacks that compression software with its buffer overflow exploit, breaking out of the program and into the memory of the computer running the software to run its own arbitrary commands.

But here's the part that really makes it of merely theoretical interest right now:

Despite that tortuous, unreliable process, the researchers admit, they also had to take some serious shortcuts in their proof-of-concept that verge on cheating. Rather than exploit an existing vulnerability in the fqzcomp program, as real-world hackers do, they modified the program's open-source code to insert their own flaw allowing the buffer overflow. But aside from writing that DNA attack code to exploit their artificially vulnerable version of fqzcomp, the researchers also performed a survey of common DNA sequencing software and found three actual buffer overflow vulnerabilities in common programs. "A lot of this software wasn't written with security in mind," Ney says. That shows, the researchers say, that a future hacker might be able to pull off the attack in a more realistic setting, particularly as more powerful gene sequencers start analyzing larger chunks of data that could better preserve an exploit's code.

Luckily the article admits it's just speculative:

Needless to say, any possible DNA-based hacking is years away. Illumina, the leading maker of gene-sequencing equipment, said as much in a statement responding to the University of Washington paper. "This is interesting research about potential long-term risks. We agree with the premise of the study that this does not pose an imminent threat and is not a typical cyber security capability," writes Jason Callahan, the company's chief information security officer. "We are vigilant and routinely evaluate the safeguards in place for our software and instruments. We welcome any studies that create a dialogue around a broad future framework and guidelines to ensure security and privacy in DNA synthesis, sequencing, and processing."

Here's the article:

• Andy Greenberg, Biohackers encoded malware in a strand of DNA, Wired, 10 August 2017, https://www.wired.com/story/malware-dna-hack/

I apologize for the sexist, cheesy picture here. It's the best I could find for the theme of 'DNA hackers'.

#biology

Electric moonlight

Listen to Tina S absolutely shred the third movement of Beethoven's Moonlight Sonata. She is now 17, and getting better, more intense and more nuanced, each year.

She says:

I am part of a generation that has a huge advantage over past generations. The tools of communication today allow people to publish their work, their passion, and be recognized by the whole world without moving from their chair.

For me, the guitar is a game. This is what allows me to play every day with desire and pleasure, without turning this game into work.

#music

A wonderful appearance of a wonderful number

Suppose you take the complete graph on n vertices and randomly assign a number between 0 and 1 to each edge. Call these numbers lengths. Suppose they are independent and uniformly distributed random variables.

Now look for a spanning tree: a bunch of edges that include all the vertices, but don't form any loops. Find a minimal one: one where the sum of all the edge lengths is as small as possible!

What's the total length of all the edges, in this minimal spanning tree?

Of course, it's random. But Frieze showed that as n → ∞, it converges to this number:

ζ(3) = 1/1³ + 1/2³ + 1/3³ + ...

More precisely, for any ε > 0, the probability that this total length differs from ζ(3) by more than ε approaches zero as n → ∞.
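
You can watch this happen numerically. Here's a quick Monte Carlo sketch in Python (my own sanity check, nothing from Frieze's paper): build the complete graph with random edge lengths, find a minimum spanning tree with Kruskal's algorithm, and compare the total length to ζ(3) ≈ 1.2020569.

    import random

    def random_mst_length(n):
        # complete graph on n vertices with i.i.d. uniform [0,1] edge lengths
        edges = sorted((random.random(), i, j)
                       for i in range(n) for j in range(i + 1, n))
        parent = list(range(n))
        def find(v):                      # union-find with path compression
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        total, used = 0.0, 0
        for length, i, j in edges:        # Kruskal: add cheapest non-loop edges
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                total += length
                used += 1
                if used == n - 1:
                    break
        return total

    zeta3 = sum(1 / k**3 for k in range(1, 100000))   # 1.2020569...
    print(random_mst_length(500), "vs", zeta3)        # typically about 1.20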

This number ζ(3) is called Apéry's constant because Apéry proved what everyone had suspected all along: it's irrational.

I find this fact to be a wonderfully fundamental appearance of Apéry's constant. It was proved by Frieze in 1983, and he published it here:

• Alan M. Frieze, On the value of a random minimum spanning tree problem, Discrete Applied Mathematics 10 (1985), 47–56. Available for free at http://www.sciencedirect.com/science/article/pii/0166218X85900587

Apéry's constant also shows up when you compute the electron's gyromagnetic ratio using quantum electrodynamics. An electron is a little magnet, and its gyromagnetic ratio says how strong this magnet is. But the answer to this problem is complicated: it's the sum of infinitely many terms, and Apéry's constant shows up in the 2nd and 3rd terms. So I find this less thrilling, since it's less clean... and also less surprising, perhaps because I understand it a bit better.

Here's another really nice way that Apéry's constant shows up. It's the reciprocal of the probability that 3 positive integers chosen at random are relatively prime!

Now, "choosing a positive integer at random" doesn't really make sense. So here's what you do. For each n, compute the probability that 3 positive integers less than n, chosen uniformly at random, are relatively prime. Then take the limit as n → ∞. Voilà: you get 1/ζ(3).

https://en.wikipedia.org/wiki/Apery's_constant