Lance Fortnow's Blog, page 10

January 5, 2025

The Betty White Award for 2024

In January of 2023 I established the Betty White Award (see here), which is given to people who died late in the prior year and hence won't be in the those we lost in year X articles. I also gave out a few from prior years. Here are past winners, some awarded in retrospect.

2006: James Brown and Gerald Ford. See my post here. The post was not on the Betty White award since Betty White had not died yet. But the idea is there.

2021: Betty White and Archbishop Tutu. See my post here. I didn't have the award yet but I do now. Can person X win the X award? I say yes. It's rare, since usually the person an award is named after is dead, but in this case that's kind of the point.

2022: Pelé (soccer player), Barbara Walters (broadcast journalist), Pope Emeritus Benedict. See my post here. Three famous people! To quote my post: One was prominent in one of the world's most popular sports. The others were a broadcast journalist and a former Pope.

2023: Tommy Smothers (singer and comedian). See my post here. Since I collect novelty songs I knew who he was, but I think most people did not.

2024: I began writing this post on Dec 28. Bad idea- the whole point of the award is that we should WAIT until the year is OVER before doing retrospectives. In fact, the award was going to go to Rickey Henderson (famous baseball player), Greg Gumbel (sportscaster), and Olivia Hussey (actress). The last two I had never heard of until they died, but I didn't want to just give it to one person.

Then on Dec 29 Jimmy Carter died. Okay then. Greg and Olivia will still get Honorable Mention. After that, but before Jan 1, Linda Lavin died; she will also get Honorable Mention.

Here are the WINNERS of the Betty White Award for 2024:

Rickey Henderson A Hall of Fame baseball player who was truly a superstar. He died on Dec 20, 2024, at the age of 65. See his Wikipedia entry here. There are two criteria for the Betty White Award.

a) Being famous enough so that IF he had died earlier, he WOULD be in the those we lost in 2024 articles. On this criterion, Rickey Henderson is solid. Note that he holds the record for most stolen bases in a career by A LOT: RH has 1406, Lou Brock is second with 938.

b) Dying too late in the year to be on those lists. I checked- he is on some lists but not others. To NOT get the Betty White award because he died late but not late enough would be really sad. SO, even though on this criterion he is borderline, the judges have decided to give it to him.

Jimmy Carter A former president of the United States; however, he may be better known for his post-presidency work on charities. He won a Nobel Peace Prize in 2002. He died on Dec 29, 2024, at the age of 100. Providing a Wikipedia link sounds silly- you all know who he is. Here is some miscellany:

a) I was hoping he would last until Biden stepped down so there would be six living ex-presidents:

Carter, Clinton, Bush Jr, Obama, Trump, Biden.  I blogged about this here.

b) Carter is the prez who lived the longest. He also had the longest marriage. Jimmy and Rosalynn Carter were married for 77 years. George and Barbara Bush were second with 73 years of marriage.

c) Carter is the only president sworn into office by his nickname Jimmy (his 'real' name is James). It annoyed me when BILL Clinton was sworn in as WILLIAM and when JOE Biden was sworn in as JOSEPH. If Jeb Bush had become prez he probably would have been sworn in by his nickname Jeb (his 'real' name is John). Oh well.

Jimmy is clearly famous enough and clearly died late enough in the year.


AND three Honorable Mentions:

Greg Gumbel A sportscaster, died on December 27, 2024 at the age of 78. See his Wikipedia entry here. SO, does he deserve it? 

a) Being famous. I've read that he is famous but frankly, I had never heard of him until I saw the obit and said Betty White Award Contender? That he had an obit on some news site IS an indicator of fame. And more to the point of the award, had he died a few days later, in 2025, he would have been on the those we lost in 2025 lists. If Jimmy Carter hadn't died he would have won the award (with Rickey Henderson, Olivia Hussey, and Linda Lavin), so I decided to give him Honorable Mention. My award, my rules.

b) Dying late. OH YEAH! Dec 27 is very late. 

Olivia Hussey An actress, died December 27, 2024 at the age of 73. See her Wikipedia entry here. SO, does she deserve the award? 

The Being Famous and Dying late comments are identical to those for Greg Gumbel. 

So she also gets an Honorable Mention. 

Linda Lavin An actress, died December 29, 2024 at the age of 87. See her Wikipedia entry here. SO, does she deserve the award?

The Being Famous and Dying late comments are identical to those for Greg Gumbel. 

So she also gets an Honorable Mention. Also she gets credit for dying really late in the year.


 •  0 comments  •  flag
Share on Twitter
Published on January 05, 2025 12:03

January 2, 2025

My Drunken Theorem

Bill's SIGACT Open Problems Column remembering Luca Trevisan is out. I chose the problem of whether Promise-ZPP in P implies Promise-BPP in P, an extension of an earlier theorem by Luca and his co-authors, which showed that Promise-RP in P implies Promise-BPP in P. But now, let me share a story that I didn’t include in print.

In the mid-1990s, I received an email from Luca saying that Noam Nisan had told him I'd come up with an easier proof of one of his theorems. Luca asked if he could use it in his upcoming paper. I had no idea what he was talking about.

Then, I vaguely remembered…

I was in Dagstuhl, back when we’d hang out in a room meant for drinking beer and wine. I had, shall we say, more than one good German Pilsner, when Noam came by and asked if I knew how to show that Promise-RP in P implies P = BPP. I mumbled something about how it might follow from Lautemann's proof that BPP is in the second level of the polynomial-time hierarchy. Lautemann’s proof uses the probabilistic method in both directions, which I thought might fit nicely into Promise-RP.
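For context, here is my paraphrase of Lautemann's characterization (not in the original post): if \(L \in \mathrm{BPP}\), then after amplifying the error below \(2^{-n}\) there is a polynomial-time predicate \(A\) using random strings of polynomial length \(m\) with

\[ x \in L \iff \exists y_1,\dots,y_m \in \{0,1\}^m \ \forall z \in \{0,1\}^m \ \bigvee_{i=1}^{m} A(x, z \oplus y_i), \]

which puts BPP in \(\Sigma_2^p\). For \(x \in L\), randomly chosen shifts \(y_i\) cover all of \(\{0,1\}^m\); for \(x \notin L\), a random \(z\) escapes every shift. Those are the two directions of the probabilistic method.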

Now to all you kids out there: you should never drink and derive. A theorem you prove after a couple of beers usually falls apart when you sober up. But this time it turns out I was right—and I totally forgot about it until I got Luca’s email.

I never admitted this to Luca but did give him permission to include it in his paper with Andreev, Clementi, and Rolim. And they did.

However, Lautemann’s proof doesn’t tell us anything about Promise-ZPP, so that problem remains open. Go ahead, read Bill’s column, and give it a try. If you drink a couple of Warsteiners along the way, it may not help you prove the theorem—but at least you’ll enjoy some good beer.

Published on January 02, 2025 05:30

December 23, 2024

Complexity Year in Review

Back in the day (circa 1989) we studied locally random reductions which would lead to all those exciting interactive proof results. Somehow locally random reductions got rebranded as locally correctable codes and this year's result of the year settled a long-standing open question. 

An Exponential Lower Bound for Linear 3-Query Locally Correctable Codes
Pravesh Kothari and Peter Manohar
Roughly: if you want a code where each bit is a linear combination of three other appropriately-chosen random bits with constant error, you're going to need a very long code. More in Quanta.

Things Bill wanted me to mention in this post: R(5), the new Mersenne prime, Busy Beaver, Vazirani's delayed proof, formal verification of the sum-check protocol, and AI song generation.

2024 was quite a year: we saw a computational complexity theorist, Avi Wigderson, win the Turing Award, and computer scientists win Nobel Prizes in both chemistry and physics. Also some elections, wars, and college protests. It's all a prelude to a perfect storm for US higher education with the oncoming trains of the new administration, artificial intelligence, fiscal challenges, and the demographic cliff. Hang on tight, it's going to be a bumpy ride.

We remember Rance Cleaveland, Peter Higgs, Thomas Kurtz, Phil Lewis, Steven Rudich, Frank Ryan, Jim Simons, Luca Trevisan, Dominic Welsh and Niklaus Wirth.

We thank all our guest posters and collaborators: Eric Allender, Martin Bullinger, Max Burkes, James De Santis, Mohammad Hajiaghayi, Neil Immerman, Valentine Kabanets, Harry Lewis, and Larry Washington.

Enjoy the holidays and we'll see you in January. 

Published on December 23, 2024 04:21

December 18, 2024

Information is Physical?

I've heard a few times recently the phrase "Information only exists in a physical state". It comes from the quantum computing world, where they claim quantum changes the game when it comes to representing information.

As one who has spent his career studying theoretical information that never has and never will exist in a physical state, how can we reckon with such a statement? For starters, let's consider the set of all primes--how does that infinite set exist in our finite world?

Information is physical, but only indirectly, through its description. We can give a computational process, or more generally a mathematical model, that captures the set of all primes, and we can (and do) store that description physically.

Let's consider a single prime, the recently discovered Mersenne prime \(2^{136279841}-1\). Note how we must describe the number in a very compressed format, certainly not as a collection of \(2^{136279841}-1\) ping pong balls or even \(2^{136279841}-1\) atoms, far more than the roughly \(10^{80}\) atoms in the observable universe.
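To see how cheap the description is, here is a quick Python sketch (my own illustration): the number has over 41 million decimal digits, yet two short lines specify it completely.

```python
# The Mersenne prime 2^136279841 - 1: a 41-million-digit number
# built from a description only a few characters long.
p = 136279841
m = (1 << p) - 1          # the full number, about 17 MB in memory

# All p bits of m are 1, so its bit length recovers the exponent.
print(m.bit_length())     # → 136279841
```

The short program is the physical description; the explicit digit string is something we almost never need to materialize.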

In a similar fashion, a large language model stores information through its weights--not a direct encoding of the sentences it can generate. 

Now let's think of quantum computing. The quantum algorithm is always classically defined. All the information in quantum states has a classical description. An entangled quantum state may require an exponentially large explicit description, but the algorithm generating it provides a short classical physical description. So if we allow information to be physically represented only by its description, then it's hard to argue that quantum is somehow special. There are differences in how quantum information works, but when we try to simplify the message, it can confuse people into thinking quantum is more powerful than it really is.


Published on December 18, 2024 09:04

December 15, 2024

Random Thoughts on AI (Human Generated)

 (I wrote this post without any AI help. OH- maybe not- I used spellcheck. Does that count? Lance claims he proofread it and found some typos to correct without any AI help.)


I saw a great talk on AI recently by Bill Regli, who works in the field. 

Announcement of the talk: here

Video of the talk:  here

-----------------------------------------------
1) One item Bill R mentioned was that AI requires lots of energy, so Three Mile Island is being reopened. See here.

Later I recalled the song

        The Girl from Three Mile Island

to the tune of

        The Girl from Ipanema.

The song is in my audio tape collection but that is not useful, so I looked for it on the web. The copy on YouTube doesn't work; however, this website of songs about Three Mile Island here includes it.

In the 1990's I was in charge of the Dept Holiday Entertainment since I have an immense knowledge of, and collection of, novelty songs- many in CS and Math.

Today my talents are no longer needed, as anyone can do a Google search and find stuff. I did a blog post on that here. I still have SOME advantage since I know what's out there, but not as much. Indeed, AI can even write and sing songs. I blogged about that and pointed to one such song here.

SO, some people's talents and knowledge are becoming obsolete. On the level of novelty songs I am actually HAPPY that things change- I can access so much stuff I could not before. But humans becoming obsolete is a serious issue of employment and self-worth. Far more serious than MACHINES TAKE OVER THE WORLD scenarios.

---------------------------------------------------------
2) When technology made farming jobs go away, manufacturing jobs took their place. That was true in the LONG run, but in the SHORT run there were starving ex-farmers. The same may happen now.

(ADDED LATER: someone emailed me that machines taking over farming and other things have caused standards of living to go up. YES, I agree- in the LONG run very good, but in the short run people did lose their livelihoods.)

Truck drivers and nurses may do better than accountants and lawyers:

Self Driving trucks are 10 years away and always will be.
Nurses need to have a bedside manner that AI doesn't (for now?).

One ADVANTAGE of AI is that if it makes white collar workers lose jobs the government might get serious about

Guaranteed Basic Income, and

Universal health care

(ADDED LATER: someone emailed me that GBI is not the way to go. Okay, then I should rephrase: when white-collar workers lose their jobs, the problem of a social safety net will suddenly become important.)

Similar: if global warming makes the Cayman Islands sink, then suddenly global warming will be an important problem to solve.

------------------------------------------------
3) An example of AI taking away jobs is the Writers Strike.

OLD WAY: There were 10 people writing Murder She Wrote Scripts.

NEW WAY: An AI generates a first draft and only needs 2 people to polish it.

KEY: In a murder mystery the guilty person is an innocuous character you saw in the first 10 minutes or a celebrity guest star. Sometimes the innocuous character is the celebrity guest star.

-------------------------------------------------
4) ChatGPT and school and cheating.

Calculator scenario: We will allow students to use ChatGPT as we now allow calculators. Students are not as good at arithmetic, but we don't care. Is ChatGPT similar?

Losing battle scenario: Ban ChatGPT.

My solution, which works (for now): Ask questions that ChatGPT is not good at, allow ChatGPT, and insist the students understand their own work and admit they used it. Works well in grad courses and even senior courses. Might be hard in freshman courses.

Lance's solution: Stop giving out grades. See here.

----------------------------------------------
5) Bill R said that we will always need humans who are better at judgment.

Maybe a computer has better judgment. I blogged on this here.

 --------------------------------------------------
6) I asked two AI people at lunch if the AI revolution is just because of faster computers and hence is somewhat limited. They both said YES.

SO- could it be that we are worrying about nothing?

This also may be an issue with academia: if we hire lots of AI people because it's a hot area, it may cool off soon. Actually I thought the same thing about Quantum Computing, but I was wrong there.

----------------------------------------------
7) LLMs use LOTS of energy. If you get to ask one How do we solve global warming? it might say

First step: Turn me off!

----------------------------------------------
8) Scott did a great blog post about the ways AI could go. See here.

--------------------------------
9) I recently emailed Lance a math question.

He emailed me the answer 5 minutes later.

I emailed that I was impressed.

He emailed that he just asked ChatGPT. He had not meant to fool me, he just assumed I would assume that. Like if you asked me what 13498*11991 was and I answered quickly, you would assume I used a calculator. And if there is a complicated word in this post that is spelled correctly, then you would assume I used spellcheck- and there is no embarrassment in that.

--------------------------------
10) If a painting is done with AI does any human get credit for it?

I always thought that people who forge paintings that look JUST LIKE (say) a van Gogh should be able to be honest about what they do and get good money, since it LOOKS like a van Gogh; who cares that it is NOT a van Gogh? Same with AI- we should not care that a human was not involved.

IF an AI finds a cure for cancer, Great!

If an AI can write a TV series better than the human writers, Great!

--------------------------------------------------------
11) AI will force us to make moral choices. Here is a horrifying scenario:

Alice buys a self-driving car and is given some options, essentially the trolley problem:

If your car has to choose who to run over, what do you choose?

You have the option of picking by race, gender, age, who is better dressed, anything you want.

-------------------------------------------------------
12) Climate Change has become a political problem in that

Democrats think it IS a problem
Reps think it is NOT a problem

Which is a shame, since free-market solutions that would normally appeal to Reps are not being done (e.g., a carbon tax). Indeed, we are doing the opposite- some states impose a tax on hybrid cars.


SO- how will AI go with politics? Scenarios

a) Dems are for regulation, Reps are against it. Elon Musk worries about AI and he is a powerful Rep, so this might not happen. Then again, he supports Reps, many of whom have BANNED E-cars in their states.

b) AI-doomsayers want more regulation, AI-awesomers do not, and this cuts across party lines.

c) We will ignore the issue until it's too late.

If I was a betting man ...

----------------------------------------------------------
13) International cooperation on being careful with AI. Good luck with that.

My cynical view: international treaties only work when there is nothing at stake.

The chemical weapons ban works because they are hard to use anyway.

The treaty on exploring Antarctica was working until people found stuff there they wanted. It is now falling apart

Published on December 15, 2024 05:50

December 11, 2024

It's Time to Stop Using Grades

We use grades to evaluate students and motivate them to learn. That works as long as grades remain a reasonably good measure of how well the student understands the material in a class. But even this most basic of academic measurements cannot escape Goodhart's law: "When a measure becomes a target, it ceases to be a good measure." Grades become irrelevant or, even worse, counterproductive, as chasing grades may undermine a student's ability to master the material. So perhaps it is time to retire the measure.

Grading became a weaker measure due to grade inflation and academic dishonesty. Let's do a short dive into both of these areas.

The average grade has increased about a full grade level since I went to college in the '80s, and now more than half of all grades given are A's. As college tuition increased, students started thinking of college more transactionally, expecting more from their college experience while putting less effort into classes. Administrators put more weight on student course surveys for faculty evaluation, and the easiest way to improve scores is to give higher grades. And repeat.

If everyone gets an A, no one gets an A. It just becomes harder to distinguish the strong students from the merely good.

Academic dishonesty goes back to the beginning of academics but has advanced dramatically with technology. In my fraternity, we had filing cabinets full of old homework and exams ostensibly to use as study guides. However, if a professor reused questions from year to year, one could gain an unfair advantage.

With the growth of the Internet, Chegg, and more recently large-language models, those looking for an edge never had it so good. ChatGPT-4o1 can answer nearly any undergraduate exam question in any field—it even got an easy A when I tested it with one of my undergraduate theory of computing finals.

AI becomes like steroids: those who don't use it find themselves at a disadvantage. If a pretty good student sees their peers using LLMs, they'll start using them as well, initially just as a learning aid. But there's a very fine line between using AI as a study guide and using AI to give you the answers. Many fall down a slippery slope, and this starts to undermine the mastery that comes with tackling problems on your own.

We can try to counter all this by returning to harsher grading and more heavily weighting in-person, no-tech exams, but these approaches cause other problems. Already we see companies and graduate schools devalue grades and focus on projects and research instead.

So let's acknowledge this endgame and just eliminate grades, maybe keeping only Pass and Fail for those who don't even show up. The ones who want to master the material can focus on doing so. Others can concentrate on working on projects. Still others can earn their way to a degree with little effort but also with little reward.

Published on December 11, 2024 07:29

December 8, 2024

My comments on Lance's Favorite Theorems

In Lance's last post (see here) he listed his favorite theorems from 1965 to 2024.
There are roughly 60 Theorems. I mostly agree with his choices and omissions. I will point out where I don't.

I could make a comment on every single entry; however, that would be madness! Madness I say!

Instead, here are some random thoughts.  (Is that Random as in Random Restriction or Random as in Random Access Machine?  I leave that an exercise for the reader.)

I) 1965-1974

MANY BASIC RESULTS WITH EASY PROOFS.
EXAMPLE:
The Cook-Levin Theorem: P, NP, and SAT is NPC.

ANSWERS A QUESTION:
Ladner: Answers a very important question: YES, if P NE NP there are intermediary sets. The set is constructed for the sole purpose of not being in P or NPC. Graph Isom and Factoring are natural candidates for being intermediary.

SEEMS TO HAVE BEEN FORGOTTEN:
Blum: Abstract Complexity Theory. Seems to not be taught anymore. I give a corollary for our young readers who might not know it:

There is a decidable set A such that if A is in DTIME(T(n)) then A is in DTIME(log T(n)). Hence A cannot be assigned a complexity. (The set A is constructed for the sole purpose of having this property. There are no natural examples, or even candidates, for sets that have this behavior.)

I might disagree with putting this on the list. It has not stood the test of time; however, it still seems important. 


II) 1975-1984.

This may be my favorite decade on the list; however, it's been said that everyone thinks the best music was made when they were a teenager.

EXAMPLES:

INTERESTING THEOREMS WITH INTERESTING PROOFS:
Everything on the list is in this category but I pick out three:

Baker-Gill-Solovay Oracles: The basic paper for my thesis work.

Furst-Saxe-Sipser Parity is not in constant depth. A meaningful lower bound on a natural problem! Motivated by an oracle open question (separating PH from PSPACE); however, circuit complexity quickly became a field unto itself. What is more interesting, the circuit lower bound or the oracle corollary? I would vote for the circuit lower bound. The issue was discussed here.

Valiant-Permanent. Perm is #P-complete is a theorem I've learned and forgotten many times. Scott much later gave a proof that may be more intuitive for some people (I am not one of them), see here. The only theorem I've learned-and-forgotten more is the Hales-Jewett Theorem.

 
III) 1985-1994.

A Decade of Surprises!

Barrington: Branching programs more powerful than we thought!

Toda: #P is more powerful than we thought!

LFKN: IP is more powerful than we thought! Bonus: used non-rel methods! (This result was not on the list but would have been if anybody except L or F or K or N had written the list.)

Nisan: Randomization is less powerful than we thought!

Lance did not have Factoring in Quantum P on the list for 1985-1994. It came out in 1994 towards the end of the year so it ended up missing both the 1985-1994 list and the 1995-2004 list. Reminds me of the Betty White awards, see here.  I do not think we disagree on the importance and merit of the result, though we disagree about altering the past- I would have included it in the post he did recently and explain that it was a late add to an old list.

IV) 1995-2004.

In 1990 a theorist told me that he could teach EVERYTHING known in complexity theory in a year-long graduate course. Even then that was not quite right; he may have really meant that he could teach everything he thought was important. By 1995 this was no longer true. The PCP result alone would take a few months.

Theory begins to get really hard. Most of the papers before 1992 I have read and understood. Most of the papers after 1992 I have not, though I know what's in them. Sometimes not even that!

PCP: Many problems are hard to approximate. Good to know, bad that it's true. Proofs are hard!

Raz's Parallel Repetition: Really useful in later non-approximability results (my favorite: Set Cover lower bounds, see this survey here) but also really hard to read.

Most of the other papers are also hard and important.

AKS- Primality in P- not that hard. Indeed, surprising that it was not proven until 2002.

Lance did not include the work on Natural Proofs by Razborov and Rudich. He says why in a blog post here. I disagree with him- I would have put it in a top theorems list.

V) 2005-2014.

Some upper bounds and some lower bounds. By now it was hard to have a surprising result since our intuitions were not so firm as to be surprised. (There is one exception in the 2015-2024 list.)

Reingold-Undirected Connectivity in Log Space: great result! I wish the proof was easier. I think people thought this would be true.

Lots of interesting lower bounds: Nash Equilibrium, Unique Games Conjecture, new PCP proof. None of these was surprising, though perhaps the fact that we could prove things about these concepts is surprising.

JJUW-QIP=PSPACE. Really! I was shocked to find out that was true. No, I wasn't. I didn't understand QIP well enough. Was this surprising or not? Was the fact that this could be proven surprising or not?


VI) 2015-2024.

No real theme here though they all have hard proofs. I discuss a few.

Babai-Graph Isomorphism is the only result in this decade that I can explain to Darling. And she has a Master's degree in SE so she knows stuff. (I recently told her the result that for all 2-colorings of R^6 there is a mono unit square (see here). She was unimpressed.)

BZ- Dichotomy: Excellent result and explains the lack of natural intermediary problems.

CZ-Extracting Ramsey Graphs: An example of TCS helping to prove a result in math, though it also shows that the border between the two is thin. Obviously a favorite of mine.

JNVWY- MIP* = RE. This surprised people, including me.

Published on December 08, 2024 14:55

December 4, 2024

Favorite Theorems: The Complete List

Now in one place all of my sixty favorite theorems from the six decades of computational complexity (1965-2024).

2015-2024
Graph Isomorphism (Babai)
Sensitivity (Huang)
Quantum Provers (Ji-Natarajan-Vidick-Wright-Yuen)
Dichotomy (Bulatov, Zhuk)
Algebraic Circuits (Limaye-Srinivasan-Tavenas)
Extracting Ramsey Graphs (Chattopadhyay-Zuckerman)
Random Oracles (Håstad-Rossman-Servedio-Tan)
Parity Games (Calude-Jain-Khoussainov-Li-Stephan)
Gradient Descent (Fearnley-Goldberg-Hollender-Savani)
Learning from Natural Proofs (Carmosino-Impagliazzo-Kabanets-Kolokolova)

2005-2014
Undirected Connectivity in Log Space (Reingold)
Optimal Inapproximability for Max-Cut (Khot-Kindler-Mossel-O'Donnell, Mossel-O'Donnell-Oleszkiewicz)
Limitations of linear programming (Fiorini-Massar-Pokutta-Tiwary-de Wolf, Rothvoß)
Complexity of Nash Equilibrium (Daskalakis-Goldberg-Papadimitriou, Chen-Deng)
Combinatorial Proof of PCP Theorem (Dinur)
Constructive Proof of the Lovász Local Lemma (Moser)
Polylogarithmic independence fools AC0 (Braverman)
QIP = PSPACE (Jain-Ji-Upadhyay-Watrous)
Lower Bounds on Multilinear Formulas (Raz)
NEXP not in ACC0 (Williams)

1995-2004
Derandomization (Impagliazzo-Wigderson)
Primality (Agrawal-Kayal-Saxena)
Probabilistically Checkable Proofs (Håstad)
Connections (Trevisan)
Superlinear Bounds on Branching Programs (Ajtai)
Parallel Repetition (Raz)
List Decoding (Sudan)
Learning Circuits (Bshouty-Cleve-Gavaldà-Kannan-Tamon)
Quantum Lower Bounds (Ambainis)
Derandomizing Space (Saks-Zhou)

1985-1994
To mark my first decade in computational complexity during my pre-blog days, I chose my first set of favorite theorems from that time period for an invited talk and paper (PDF) at the 1994 Foundations of Software Technology and Theoretical Computer Science (FST&TCS) conference in Madras (now Chennai), India. The links below go to the papers directly, except for Szelepcsényi's, which I can't find online.
Branching Programs (Barrington)
Bounded-Depth Circuits (Håstad)
Monotone Circuits (Razborov)
Nondeterministic Space (Immerman, Szelepcsényi)
Cryptographic Assumptions (Impagliazzo-Levin-Luby)
Isomorphism Conjecture (Ogihara-Watanabe)
Simulating Randomness (Nisan)
Counting Complexity (Toda)
Counting Classes (Beigel-Reingold-Spielman)
Interactive Proof Systems (Arora-Lund-Motwani-Sudan-Szegedy)

1975-1984 (From 2006)
Alternation (Chandra-Kozen-Stockmeyer)
Relativization (Baker-Gill-Solovay)
Small Sets (Mahaney)
Primality Algorithms (Solovay-Strassen, Miller, Rabin)
Probabilistic Complexity (Gill, Sipser)
The Permanent (Valiant)
Unique Witnesses (Valiant-Vazirani)
Parity (Furst-Saxe-Sipser, Ajtai)
The Yao Principle (Yao)
Nonuniform Complexity (Karp-Lipton)

1965-1974 (From 2005)

The Seminal Paper (Hartmanis-Stearns)
Efficient Computation (Cobham-Edmonds)
NP-Completeness (Cook)
Combinatorial NP-Complete Problems (Karp)
The Polynomial-Time Hierarchy (Meyer-Stockmeyer)
P = NP for Space (Savitch)
Abstract Complexity (Blum)
NP-Incomplete Sets (Ladner)
Logical Characterization of NP (Fagin)
Defining the Future (Cook-Reckhow)
Will I do this again in ten years when I'm 70? Come back in 2034 and find out.
Published on December 04, 2024 05:58

December 1, 2024

Conway's Trick for Divisibility. Asking its complexity is an odd question.

 (I got this material from a nice article by Arthur Benjamin here.)

 Conway suggested the following trick to determine if a number is divisible by each of the following: 

2, 3, 5, 7, 11, 13, 17, 19, 31

Note that

\( 152=2^3\times 19\)

\(153 =3^2 \times 17\)

\(154=2  \times 7 \times 11\)

\(155=5 \times 31\)

\(156=2^2 \times 3 \times 13\)

Here is the Div trick:

a) Input N

b) Divide N by 150 and note the remainder. So

 N=150q+r

r=N-150q 

Subtract q from r a few times: 

Note that

r-q = N-150q-q = N-151q

r-2q=N-152q

AH HA!- if 19 divides r-2q then 19 divides N. So divide r-2q by 19. (Note that r-2q is much smaller than N. Small enough to make this technique feasible? That is the question!)

r-3q=N-153q.

AH HA!- if 17 divides r-3q then 17 divides N. So Divide r-3q by 17.

r-4q=N-154q

AH HA- if 7 divides r-4q then 7 divides N. So divide r-4q by 7.

r-5q=N-155q

AH HA- if 31 divides r-5q then 31 divides N. So Divide r-5q by 31.

r-6q=N-156q

AH HA- if 13 divides r-6q then 13 divides N. So Divide r-6q by 13. 

So with 1 division, 6 subtractions, and 5 divisions by small numbers (note \(r < 150\) and \(q \le N/150\)), you find out the divisibility by 7, 13, 17, 19, 31. For 2, 3, 5, 11 there are well known tricks to use, OR you can test those as well by (for example) dividing r-4q = N-154q by 11.
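The steps above can be sketched in a few lines of Python (my own code, not from Benjamin's article); the key identity is r - kq = N - (150+k)q, so any prime dividing 150+k divides N exactly when it divides r - kq:

```python
def conway_divisibility(n):
    """Conway's trick: one division by 150, then tests on small numbers.

    Since 152 = 2^3*19, 153 = 3^2*17, 154 = 2*7*11, 155 = 5*31,
    and 156 = 2^2*3*13, a prime p dividing 150+k divides n
    exactly when it divides r - k*q.
    """
    q, r = divmod(n, 150)
    small = {19: r - 2*q, 17: r - 3*q, 7: r - 4*q,
             11: r - 4*q, 31: r - 5*q, 13: r - 6*q}
    return {p: v % p == 0 for p, v in small.items()}

# Example: 2024 = 2^3 * 11 * 23, so only the test for 11 fires.
print(conway_divisibility(2024))
```

Note that r - kq can go negative; Python's % handles that correctly, since divisibility doesn't care about sign.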

Some Points

1) Is this method faster than just dividing N by the numbers (and using tricks for 2,3,5,11)? You would need to get into addition being faster than division, and look at the size of the numbers.

2) Is this method practical? For hand calculation YES. For computers it would be easy to code up, but the main question of this post: is it better than just dividing N by the numbers?

3) Are there larger runs of numbers that pick up more divisors? Yes. We present one. The order will look funny but we explain it later.

\(2000=2^4 \times 5^3 \) (you could skip this one, though dividing by 2000 is easier than by 2001)

\(2001=23\times 29\times 3\) (would divide N-2q by both 23 and 29)

\(2002=7\times 11\times 13\times 2\)

\(1998=37\times 54\)

\(2006=17\times 59\times 2\)

\(2010=67\times 30\)

\(2014=19\times 53\times 2\)

\(2013=61\times 33\)

\(2015=31\times 65\)

\(2009=41\times 49\)

\(2021=43\times 47\)

The order was suggested by Conway so that the algorithm at every step adds or subtracts one of q, 2q, 4q, 6q, 8q, 12q. So after you get q you can compute these values.

I leave it to the reader to count the number of divisions, subtractions, and size of the numbers involved.
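As a quick sanity check on the run itself, here is a short stdlib-only sketch (mine, not Conway's) that factors the numbers 1998 through 2021 listed above and collects the primes above 5 they cover:

```python
def prime_factors(n):
    """Trial-division factorization -- plenty fast for 4-digit numbers."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

run = [1998, 2000, 2001, 2002, 2006, 2009, 2010, 2013, 2014, 2015, 2021]
covered = sorted({p for m in run for p in prime_factors(m) if p > 5})
print(covered)  # every prime from 7 through 67
```

The run covers every prime from 7 up through 67, which is what makes it such a nice extension of the 152-156 block.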

4) For cracking RSA this technique is useless since RSA uses numbers of the form pq where p and q are large primes. For factoring randomly generated numbers I would be curious if this method is better than just dividing by numbers.

5) Project: find other sequences like those above that cover more prime factors.



Published on December 01, 2024 12:26

November 25, 2024

We Will All Write Like AI

Will our writing all converge to a generic AI style? 
Let's take a quick detour into LaTeX. Back in the late '80s, before LaTeX was the standard, there was TeX—a system with no default formatting, which meant everyone had their own unique style for papers. Then LaTeX arrived, and suddenly all our papers looked polished and professional. But the catch was, LaTeX only got you about 80% of the way there. The original manual even mentioned that you needed to add some extra TeX commands to really finish the job. Most of us didn’t bother, though, and soon all our papers had that same uniform look. It was efficient, and it was good enough. Microsoft Word ended up doing the same thing for everyone else—you could tweak it to be different, sure, but most people didn’t. It turns out most of us are just fine with "good enough."
I generally don't like to use large language models to write for me. But I do use them to refine my work, to catch errors or suggest improvements. The thing is, AI likes to take what I write and make it sound smoother, and I often think, "Wow, that really does sound better." But here’s the tricky part: it might be better, but it’s not mine. It’s the AI's voice. And still, if the stakes aren’t too high, sometimes I let the AI’s version slide. That’s the start of a slippery slope. Before you know it, we’re all letting AI make our writing a bit more generic, a bit more uniform. And eventually, we end up writing to match the AI’s preferred style.
For this blog post, I didn’t resist at all. Could you tell this is ChatGPT's style?
Published on November 25, 2024 04:59
