
I think that I shall never see

a poem lovely as a tree

and while you're at it,

do this for me:

DROP TABLE EMPLOYEE;


I have no mouth, and I must output a seahorse emoji.

The headshot collision code in DX is broken as well. This is from memory, from looking at the DX SDK years ago (15+ years at least), but...

The collision shape used for a character in DX is a single cylinder. The game looks at where on the cylinder the collision point of the shot is, and tries to figure out if it's a head, body, or leg shot. It does this by checking how high the collision point is, with the lower X% being legs, top Y% being the head, and the middle being the body.

If a shot hits the head section, it runs some additional checks, and can sometimes still count as a body hit. There was some weird code that, after you stared at it long enough, looked like it ended up splitting the head area into compass-aligned eighths (so north, north-east, east, etc.): hits to the N-E-S-W octants would count as a headshot, and hits to the NE-NW-SE-SW octants would count as body shots. (I couldn't tell if the angles rotate with the character, or are absolute relative to the world.) I think there was also a check for hits on the top cap of the cylinder, so that the hit would have to be close to the center of the cylinder to count as a head hit, while one near the outer rim would count as a body hit.

Hm, I should just make a diagram. Here: https://imgur.com/a/KG6MF1k

I guess what they were trying to do was make the actual head hitbox a smaller section of the head level, so that a shot that should go over the shoulder and miss would just count as a body shot and not a true headshot. And if you made a test map, with the player and a static test enemy placed in a line, this could work reliably from a fixed position. But when you actually play DX, and approach enemies from various angles, headshots inexplicably fail.
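For what it's worth, here's a rough Python sketch of the check as I've described it -- purely a hypothetical reconstruction from memory, not actual DX SDK code, and the height fractions and octant convention are made-up assumptions:

  import math

  LEG_FRACTION = 0.45   # bottom of the cylinder counts as legs (assumed value)
  HEAD_FRACTION = 0.15  # top band counts as the head region (assumed value)

  def classify_hit(x, y, z, cyl_height):
      """Classify a hit point given in cylinder-local coordinates."""
      frac = z / cyl_height
      if frac < LEG_FRACTION:
          return "leg"
      if frac < 1.0 - HEAD_FRACTION:
          return "body"
      # The hit landed in the head band: split the rim into eight
      # 45-degree octants around the cylinder axis.
      angle = math.degrees(math.atan2(y, x)) % 360.0
      octant = int(((angle + 22.5) % 360.0) // 45.0)  # 0=E, 1=NE, 2=N, ...
      # Even octants (E/N/W/S) count as a headshot, odd ones (NE/NW/SW/SE)
      # fall through to a body hit -- which would explain why headshots
      # succeed or fail depending on the angle you approach from.
      return "head" if octant % 2 == 0 else "body"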


> is some revisionist's attempt

Yeah, it was in an activist's writings that someone was citing to me. Tells for an activist book:

1. hyperbolic language

2. no discussion of alternative explanations

3. mind reading - "surely so-and-so must have understood that..." and "so-and-so's reason must have been (something nefarious)"


“just one more law bro. i promise bro just one more law and we’ll be safe bro. it’s just a little more surveillance bro. please just one more. one more law and we’ll stop all the threats bro. bro c’mon just give me access to your data and we’ll protect you i promise bro. think of the children bro. bro bro please we just need one more law bro, one more camera, one more database, and then we’ll all be safe bro”

https://onion-cutting-simulator.streamlit.app/

I made my own version of this a while back, and it lets you create your own cutting methods, plot the statistical distribution, and share your ideas via permalink. It also lets you tweak onion parameters, such as the number of layers and the layer thickness distribution curve.
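To give a flavor of what the simulation computes (this is just a toy sketch of the idea, not the app's actual code, and every parameter value is made up): model the halved onion's cross-section as concentric semicircular layers, slice it with evenly spaced vertical cuts, and Monte Carlo-estimate the area of each resulting piece to get a size distribution.

  import numpy as np

  def piece_areas(radius=1.0, n_layers=8, n_cuts=8, samples=200_000, seed=0):
      """Estimate the area of each piece of a halved onion cross-section."""
      rng = np.random.default_rng(seed)
      x = rng.uniform(-radius, radius, samples)
      y = rng.uniform(0.0, radius, samples)
      keep = x**2 + y**2 <= radius**2              # points inside the half-disk
      x, y = x[keep], y[keep]
      r = np.hypot(x, y)
      layer = np.minimum((r / radius * n_layers).astype(int), n_layers - 1)
      strip = np.minimum(((x + radius) / (2 * radius) * n_cuts).astype(int), n_cuts - 1)
      counts = np.zeros((n_layers, n_cuts))
      np.add.at(counts, (layer, strip), 1)         # each (layer, strip) cell = one piece
      half_disk_area = np.pi * radius**2 / 2
      return counts[counts > 0] * half_disk_area / len(x)

  a = piece_areas()
  print(f"pieces={a.size} mean={a.mean():.4f} std={a.std():.4f} cv={a.std()/a.mean():.2f}")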

Along the way I discovered two things:

1. I came up with my own method ("Josh’s method" in the app above) where neither the longitudinal cuts nor the planar cuts are full depth, so the number of cuts at the narrower core is smaller than at the wider perimeter.

2. After all this hyper-optimization about size, it turns out what really matters when cooking is the THICKNESS, since it ultimately determines the cooking rate. The only way to avoid thin outliers that burn long before the rest are cooked is to discard more of the tip of the onion, where the layers are the thinnest.

The 3D version of the simulator is still in progress--turns out 3D geometry is a lot harder than 2D. :)

Pull requests are welcome! https://github.com/joshwand/onion-simulator


One thing stands out when you try playing with evolutionary systems.

Evolution is _really_ good at gaming the system. Unless you are very careful at specifying all of the constraints that you care about, you can end up with a solution that is very clever but not quite what you had in mind. Here power consumption is the issue. If you tried to evolve a sturdy chair you might end up with something that is 1mm tall, or maybe a fuel-efficient car that exploits continental drift.

For circuit simulation there are a bunch of potential pitfalls beyond power consumption. I think you would probably need to do multiple runs with components of varying values within their specified precision. I can see evolution getting some sort of benefit by exploiting the fact that two identically specced components behave _exactly_ the same way -- something that would not happen in real life.
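One way to guard against that last failure mode, sketched in Python (simulate() and all the numbers here are hypothetical placeholders, not any real EDA API): score every candidate over several runs in which each component value is jittered within its tolerance, and evolve against the worst case rather than a single lucky simulation.

  import random

  def jittered(values, tolerance, rng):
      """Perturb each nominal component value within +/- tolerance."""
      return [v * (1.0 + rng.uniform(-tolerance, tolerance)) for v in values]

  def robust_fitness(candidate, simulate, runs=10, tolerance=0.05, seed=0):
      """Worst-case score over several tolerance-perturbed simulations, so
      evolution can't exploit two 'identical' parts behaving exactly alike."""
      rng = random.Random(seed)
      return min(simulate(jittered(candidate, tolerance, rng)) for _ in range(runs))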


For those who want to experience it: https://how-i-experience-web-today.com/

The only inaccurate thing about that meme page is that you only need to uncheck 5 cookie "partners", when in reality there would be at least a few hundred.


One thing that I found remarkable about Gibson is how a-technical he was at the time: "When I wrote Neuromancer, I didn't know that computers had disc drives. Until last Christmas, I'd never had a computer; I couldn't afford one. When people started talking about them, I'd go to sleep. Then I went out and bought an Apple II on sale, took it home, set it up, and it started making this horrible sound like a farting toaster every time the drive would go on. When I called the store up and asked what was making this noise, they said, "Oh, that's just the drive mechanism—there's this little thing that's spinning around in there." Here I'd been expecting some exotic crystalline thing, a cyberspace deck or something, and what I'd gotten was something with this tiny piece of a Victorian engine in it, like an old record player (and a scratchy record player at that!). That noise took away some of the mystique for me, made it less sexy for me. My ignorance had allowed me to romanticize it." (https://www.jstor.org/stable/20134176)

It turns out that Woodland v. Hill is not about landscape photography.

Quick reminder of scam economics:

* Your funnel starts digital and cheap (email, say).

* You need "warm leads" out of the funnel, and your closers are expensive (call center operators usually in SE Asia), so you prune to only great leads. You do this by making the email something only very credulous people would believe.

* You aim for a nearly 100% close rate once you get them on the phone, since closers are expensive: one hour spent closing is one hour of human time.

There are two things an AI with a nice English accent that's scamming you does to change this: first, it makes closing cheap, so the funnel can stay wide earlier. This means we'll be seeing much more plausible, harder-to-spot scam content -- there's no need to prune skeptical people so early. Second, the LLM is much smarter than, let's call it, your bottom-third call center operator, which buys you longer, safer direct contact with formerly inaccessible leads.

The economics here mean we're going to see a LOTTT of this over the next few years, and it's likely to change how we think about trust at all, and how we think about open communication networks, like the phone system.


The proper units for electric field would be voltage per unit length. Fortunately an electric eel has both a voltage and a length, so it could be eels per eel.

Not stranger than my experience with OpenAI. I got banned from DALL-E 3 access when it first came out because I asked in the prompt about generating a particle moving in a magnetic field in the forward direction and decaying into two other particles, with a kink angle between the particle and the charged daughter.

I don't recall the exact prompt, but it was something close to that. I really wonder what filters they had around kink tracks, and why. Do they have a problem with beyond-Standard-Model searches? /s


Basically, this article invents an elaborate fantasy whose primary factual anchor is Tolkien writing that he very much didn’t like Dune with no further elaboration. That Tolkien was of a deontological philosophical bent and that this is reflected in his writings is a fairly reasonable statement. That Herbert was a consequentialist might be (based on other things), but that this was clearly indicated in, much less a focal message of, Dune, either the novel or the broader series, is, I feel, a quite strained reading, and seems to come from a place of excessive desire to read a simplistic moral argument into Dune.

While Mauldin notes that Herbert, “saw religion as an inherently mutable, utilitarian institution”, he seems to have failed to realize the centrality of mutability of perceived religious and moral truth, especially as it comes with distance from the facts on the ground, in Herbert’s writing, and that Dune (in the small or large sense) isn’t a fable with a pat moral lesson. I think it’s usually a mistake to argue that any but the most simplistic fiction is “about” some message that can fit on a fortune cookie, but I think it is less inaccurate to say that Dune is a challenge to simplistic narratives about the present, past, and morality (whether framed in consequentialist or deontological terms) than it is about any thesis as to what the correct framing of morality is.

Mauldin points out the Golden Path of God Emperor of Dune, but I think he mistakes the eponymous character’s voice for the authorial voice. Leto II Atreides clearly sees his vision of the future and what is necessary to save humanity as warranting any horrors done in its name, but is Herbert saying that? Or is Herbert framing it in a way that the reader will recoil at even when framed with Leto II’s prescience, and from that question acts that seem repugnant on their own when done with mere human confidence, and not science-fiction prescience, about their distant consequences?


I don't think that Tim meant it that way, but I do think it's an indication that attestation can be used that way. I think it's somewhat missing the point to say that Tim is being mean; Tim could be entirely supportive and this would still be a danger and would still be a risk that FIDO has to address and that it's currently ignoring.

This is a spec issue. There are (as far as I can tell) no penalties or mechanisms to guard against businesses using attestation punitively to punish actors that deviate from the spec even when that deviation is in the interest of users. We are relying on corporate good will, and corporate good will is not something that ought to be relied on. Now, what that comment reveals is that there are multiple actors pushing to extend attestation to roaming keys and to have even fewer safeguards against excluding clients from the ecosystem.

That's scary, and that's a much bigger problem than one person.

When Apple zeroed out attestation requests for its roaming keys, I was told by multiple advocates that this meant that attestation would not be coming for roaming keys, and that attestation would never be used this way. After all, attestation was primarily intended (they told me) for regulatory compliance in industries that were primarily worried about device-bound keys. "What if Apple changes its mind" was dismissed as fearmongering.

But here we see the danger of setting expectations about an ecosystem based on Apple deciding randomly not to do something. There was no public commitment from FIDO not to pursue additional attestation for roaming keys, and behind the scenes it looks like players calling for that attestation have more sway than advocates let on.

So we end up with essentially a threat against implementations that deviate from the standard even when they are deviating in the clear interest of users. I don't think Tim meant it as a threat, if anything he was probably trying to be helpful. But it is still a threat regardless of the intention. And it's a threat because the ecosystem explicitly exposes tools to enable this kind of behavior. It's not a threat because of Tim, it's a threat because it's plausible. And it shouldn't be plausible.

This is why attestation is dangerous without safeguards; and the FIDO alliance has completely ignored the need for safeguards. Maybe there are conversations that I'm not privy to -- there probably are. But from the outside, it looks like a lot of people hoping that Google and Apple and Netflix and your bank will all magically care about not locking users to hardware and will completely voluntarily choose not to abuse attestation. And I'm sorry, that's just not in their nature to do. If the spec directly gives these companies tools to be abusive, then they're going to be abusive.


Most people who're suffering from cognitive dissonance don't have a problem with the dissonance itself; rather, they struggle to articulate it with their limited vocabulary. It is only after they obtain the vocabulary that they can cognitively start processing their emotions/experiences. It's only when the frontal cortex can build new adaptive mental models to process the emotions that people can move forward; everything else is a coping strategy.

It is a large part of why therapy is useful: you outsource that vocabulary/articulation/interpretation to a third party who can articulate it back to you, which helps speed up that feedback loop/process.



There's a bunch. Here's what I do (for black-and-white text; I'm not sure how to deal with more complex scenarios):

Scan with 600dpi resolution. Nevermind that this gives huge output files; you'll compress them to something much smaller at the end, and the better your resolution, the stronger compression you can use without losing readability.

While scanning, periodically clean the camera or the scanner screen, to avoid speckles of dirt on the scan.

The ideal output formats are TIF and PNG; use them if your scanner allows. PDF is also fine (you'll then have to extract the pages into TIF using pdfimages or using ScanKromsator). Use JPG only as a last resort, if nothing else works.

Once you have TIF, PNG or JPG files, put them into a folder. Make sure that the files are sorted correctly: IIRC, the numbers in their names should match their order (i.e., blob030 must be an earlier page than blah045; it doesn't matter whether the numbers are contiguous or what the non-numerical characters are). (I use the shell command mmv for convenient renaming.)

Import this folder into ScanTailor ( https://github.com/4lex4/scantailor-advanced/releases ), save the project, and run it through all 6 stages.

Stage 1 (Fix Orientation): Use the arrow buttons to make sure all text is upright. Use Q and W to move between pages.

Stage 2 (Split Pages): You can auto-run this using the |> button, but you should check that the result is correct. It doesn't always detect the page borders correctly. (Again, use Q and W to move between pages.)

Stage 3 (Deskew): Auto-run using |>. This is supposed to ensure that all text is correctly rotated. If some text is still skewed, you can detect and fix this later.

Stage 4 (Select Content): This is about cutting out the margins. This is the most grueling and boring stage of the process. You can auto-run it using |>, but it will often cut off too much and you'll have to painstakingly fix it by hand. Alternatively (and much more quickly), set "Content Box" to "Disable" and manually cut off the most obvious parts without trying to save every single pixel. Don't worry: White space will not inflate the size of the ultimate file; it compresses well. The important thing is to cut off the black/grey parts beyond the pages. In this process, you'll often discover problems with your scan or with previous stages. You can always go back to previous stages to fix them.

Stage 5 (Margins): I auto-run this.

Stage 6 (Output): This is important to get right. The despeckling algorithm often breaks formulas (e.g., "..."s get misinterpreted as speckles and removed), so I typically uncheck "Despeckle" when scanning anything technical (it's probably fine for fiction). I also tend to uncheck "Savitzky-Golay smoothing" and "Morphological smoothing" for some reason; don't remember why (probably they broke something for me in some case). The "threshold" slider is important: Experiment with it! (Check which value makes a typical page of your book look crisp. Be mindful of pages that are paler or fatter than others. You can set it for each page separately, but most of the time it suffices to find one value for the whole book, except perhaps the cover.) Note the "Apply To..." buttons; they allow you to promote a setting from a single page to the whole book. (Keep in mind that there are two -- the second one is for the despeckling setting.)

Now look at the tab on the right of the page. You should see "Output" as the active one, but you can switch to "Fill Zones". This lets you white-out (or black-out) certain regions of the page. This is very useful if you see some speckles (or stupid write-ins, or other imperfections) that need removal. I try not to be perfectionistic: The best way to avoid large speckles is by keeping the scanner clean at the scanning stage; small ones aren't too big a deal; I often avoid this stage unless I know I got something dirty. Some kinds of speckles (particularly those that look like mathematical symbols) can be confusing in a scan.

There is also a "Picture Zones" tab for graphics and color; that's beyond my paygrade.

Auto-run stage 6 again at the end (even if you think you've done everything -- it needs to recompile the output TIFFs).

Now, go to the folder where you have saved your project, and more precisely to its "out/" subfolder. You should see a bunch of .tif files, each one corresponding to a page. Your goal is to collect them into one PDF. I usually do this as follows:

  tiffcp *.tif ../combined.tif
  tiff2pdf -o ../combined.pdf ../combined.tif
  rm -v ../combined.tif
Thus you end up with a PDF in the folder in which your project is.
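(If you don't have libtiff's tiffcp/tiff2pdf handy, a rough Python/Pillow alternative for this last step might look like the sketch below -- not part of my usual workflow, and "out" is just the ScanTailor output folder from above.)

  from pathlib import Path
  from PIL import Image

  # Rough alternative to tiffcp + tiff2pdf: stitch the per-page TIFFs from
  # ScanTailor's out/ folder into a single PDF with Pillow.
  pages = [Image.open(p) for p in sorted(Path("out").glob("*.tif"))]
  pages[0].save("combined.pdf", save_all=True, append_images=pages[1:])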

Optional: add OCR to it; add bookmarks for chapters and sections; add metadata; correct the page numbering (so that page 1 is actual page 1). I use PDF-XChange Lite for all of this; but use whatever tool you know best.

At that point, your PDF isn't super-compressed (don't know how to get those), but it's reasonable (about 10MB per 200 pages), and usually the quality is almost professional.

Uploading to LibGen... well, I think they've made the UI pretty intuitive these days :)

PS. If some of this is out of date or unnecessarily complicated, I'd love to hear!


When Bald Eagles were more endangered, they had school presentations where they'd bring in one for the kids to see. The one I met had been severely injured and was cared for by a gentleman working for the US Forest Service (iirc), and they had a reasonable set of accommodations for each other; the eagle wasn't happy about having just one usable wing, but he understood this person helped him and this was what they did: go see school gymnasiums full of loud kids.

The eagle's big "trick" was long range, high accuracy defecation. Given prompting (or just at whim) he'd turn around and nail a basketball backboard across the court with a shot of shit. Full court. Perfect ballistic arc. Big weighty splat.

He was obviously proud of the ability (who wouldn't be?) and appreciated the celebration and merriment it caused. Birds don't think much like we do, but some things are universal.


My country's national library used to provide a service in the 70s/80s where you could send a letter or telegram with a question, any question, and they'd do their best to answer it.

My Mum was a big fan of it, I've still got a copy of their telegram response to her question of "Why don't we see birds flying overhead with a penis flopping around?"

The answer was simple.

"Most birds don't have penises. They press cloaca to cloaca. Birds that do have penises store them in their bodies when not mating. Please stop sending questions about penises"

...it wasn't her first animal penis question submitted, and I'm assuming she'd developed a reputation.


> the field is called computer security; not computer optimism

I'd like to go even further and propose the following terms:

  * computer wishful thinking
  * security by credulity
  * zero-skepticism proof

> Looking up their patents (https://patents.justia.com/assignee/e-ink-corporation?page=3...), looks like their earliest patents are from 1998, so those should be expired already.

Every time this topic of EInk comes up, people on HN seem to claim there's a patent thing. I ask the simple question of which patent is blocking, and I get lazy answers like "patent thicket". To be frank, I suspect those who make that comment aren't actually directly involved in the industry. I've been to SID and other display conferences, and the real problem is physics and also lack of funding. What I know is that EInk can't get to the lower-cost price point without solving the scale problem, which means getting an order for millions of displays. They can't solve cheap large panels because that would require solving yield issues, which again becomes a matter of scale. Startups show up but can't get the billion or so that's needed to get to scale. You can see this pattern repeated with companies like Mirasol. The real problem is that nobody wants to put millions into making displays when they could get higher ROI from putting it into another hot AI/ML or internet service company.

