Being Turing Complete Ain't All That and a Bag of Chips

I was talking to someone the other day. He claimed that given two Turing Complete programming languages, A and B, any program you can write in A has an equivalent program in B. Is that true? I suspect not.

I never took a class on computability theory, but I suspect that claim only holds for a limited subset of programs--those that only require the features a Turing machine provides. Let me offer a counterexample. Suppose language A has networking APIs and language B doesn't--nor does B have any other way to reach the network. A language can perfectly well be Turing Complete without providing such APIs. In that case, you can write a program in language A that you can't write in language B.

Of course, I could be completely wrong because I don't even understand the definitions fully. Like I said, I've never studied computability theory.


Unknown said…
I think you should have stopped on "I never took a class on computability theory". That statement is true, and there is a mathematical proof of the equivalence. You have to correctly understand what a Turing Complete language is and how a Turing machine is defined. It's really far from any of programming languages that are being used.
Unknown said…
BTW, you can always write your own Networking API in language B and then use it.
jjinux said…

> I think you should have stopped on "I never took a class on computability theory".

There's no need to be mean ;)

> It's really far from any of programming languages that are being used.

Yeah, my buddy John Chee just enlightened me about this. All this time, people have just been throwing around the claim that most programming languages are Turing Complete, whereas, as you said, "It's really far from any of programming languages that are being used."

> BTW, you can always write your own Networking API in language B and then use it.

I'm not sure how that would work if the language doesn't have APIs for accessing hardware or other processes. Hence, you certainly couldn't write Chrome in such a language.
Matt G said…
To simplify the networking example think about a program that sends a "ping" message and waits for the response "pong".

If you take Python and throw away the standard library, and even the built-in functions, you're left with a language with no network or even file access. Can you still write "ping"?

You can if you think outside the "box" (bad pun) and consider both machines as part of a bigger program. In this stripped down Python you can define classes representing the two machines, the ping and pong programs, and the network connection between them. Even though your program has no physical network, you can still compute the same result.
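A minimal sketch of that idea in stripped-down Python--all the names here (Wire, PingMachine, PongMachine) are invented for illustration, and the "network" is just a list of messages:

```python
class Wire:
    """Stands in for the network connection: just a message queue."""
    def __init__(self):
        self.messages = []

    def send(self, msg):
        self.messages.append(msg)

    def receive(self):
        # Pop the oldest message, or None if the wire is empty.
        return self.messages.pop(0) if self.messages else None


class PingMachine:
    """Sends "ping" and records whatever comes back."""
    def __init__(self, wire):
        self.wire = wire
        self.result = None

    def start(self):
        self.wire.send("ping")

    def step(self):
        self.result = self.wire.receive()


class PongMachine:
    """Replies to "ping" with "pong"."""
    def __init__(self, wire):
        self.wire = wire

    def step(self):
        if self.wire.receive() == "ping":
            self.wire.send("pong")


# "Run" both machines inside a single program -- no physical network.
wire = Wire()
ping, pong = PingMachine(wire), PongMachine(wire)
ping.start()
pong.step()
ping.step()
print(ping.result)  # prints: pong
```

No sockets, no hardware access--yet the program computes the same result the two networked machines would.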

All Turing theory says is that you can write a program in the other language that computes the same result. Networking adds a way to distribute the computation across multiple physical computers, but you can achieve the same result on one computer, given enough time and sufficient memory.

So to write Chrome, you just have to write a simulation of the Internet :)
