S/MIME is about the mail (content) itself, not the transport. For the transport there are things like (START)TLS and MTA-STS. With S/MIME you include your certificate in the mail and can either sign the mail (with your private key; others can verify the signature using your public key from the certificate) or encrypt it (with the receiver's public key, so only they can decrypt it using their private key). Certificate trust is normally established via the CA chain and trusted root CAs.
Funny timing. Just yesterday I was looking for an easy Windows tool to do a simple stress-test on a website (legally ofc). A requirement of mine was to just give it the root URL and the tool should discover the rest automatically (staying on the same domain). Also, parameters like parallelism had to be easily manageable.
After trying some crawlers / copiers and other tools I went back to a simple one I already knew from saving static copies of websites in the past: HTTrack. It fit the bill perfectly!
You can add the root URL, set it to "scan only" (so it doesn't download everything) and tweak the settings like connections and speed (and even change some settings mid-run, save settings, pause, ...). So thanks xroche for HTTrack! :)
It didn't need to in my simple case. k6 doesn't do crawling / auto-discovering, from what I can tell - I just wanted to give the tool one URL and press start.
I also use HeidiSQL almost daily. Besides MySQL / MariaDB it also can connect to MSSQL and Postgres.
I recently had to import a CSV with a few million rows into MariaDB. Neither HeidiSQL nor DBeaver could do it (I tried various settings). IntelliJ worked like a charm.
A useful feature! I have this available to me as part of Refined Hacker News [1], which does a bit more. But if you only want the user popup, a slim extension is nice.
The type of this is `time.Duration` (internally an `int64` of nanoseconds), not `time.Second`, which isn't a type but a `Duration`-typed constant.
I agree, though, that this is not quite sound, because it can be misused, as shown above with `time.Sleep(delaySecs * time.Second)`.
In Kotlin you can do `1.seconds + 1.minutes` but not `1.seconds * 1.minutes` (compilation error), which I quite like. Here is a playground link: https://pl.kotl.in/YZLu97AY8
The issue is that not every team remembers to test in incognito from time to time.
Those popups are all hidden once the relevant cookies are set. It's easy for an engineer who works on the product regularly to accrete, over time, the cookies necessary to hide most of them.
(Concretely, in this case I bet 99% of the engineers on that site have forgotten GDPR is a thing, especially since their compliance is handled by the third-party provider TrustArc. It's easy for a frequent visitor to forget that every new visitor gets asked for cookie-use permission on their first visit.)
They're likely prescribed by PR people who think of users in bulk and as less intelligent than themselves. The people actually building the site probably hate them.
Yeah, but now I have to manage about half a dozen different settings on a per-site basis. I find it a necessary evil on mobile to control bandwidth usage, but on desktop I find it easier to just not visit (or immediately leave) low-quality websites.
Think of it this way: there is always an implicit GROUP BY. With an aggregate and no GROUP BY clause, all rows form one big group; without an aggregate, it's effectively GROUP BY rownum(), i.e. many small groups of one row each.
> Positive Technologies found that both of these checks can be bypassed using a device which intercepts communication between the card and the payment terminal. This device acts as a proxy and is known to conduct man in the middle (MITM) attacks. First, the device tells the card that verification is not necessary, even though the amount is greater than £30. The device then tells the terminal that verification has already been made by another means. This attack is possible because Visa does not require issuers and acquirers to have checks in place that block payments without presenting the minimum verification.
That's the first time I hear about RFID/NFC MITM, neat.
> That's the first time I hear about RFID/NFC MITM, neat.
That's been a thing for quite a few years now in the context of pentesting, e.g. badge cloning / proxying against access control systems; see [1] for an overview presentation. There are quite a few BlackHat talks in that space that give a good overview at this point. This attack is intriguing because it circumvents more complex measures by manipulating the communication, and it obviously has a practical, direct impact on a monetary asset.
I've read elsewhere ([2], German) that Visa declines to fix this, with the explanation that it would require attackers to steal the card in the first place and is technologically too complex to be seen in the real world, which is kind of weird. The hardware required is pretty accessible at this point, but I guess their risk assessment determined that the fraud actually occurring via this method isn't currently worth fixing anything over.
At my work we use TeamCity for some things and Gitlab CI for others. Things that are good about TeamCity:
- Templates
Gitlab has something called templates, but it's a very different thing. In Gitlab, a template is used to bootstrap a project, and that's it. In TeamCity, a template is attached to a project, so if you change the template, the changes are applied to all projects that inherit from it. Each project can override any settings or build steps it got from the template without losing the association to the other settings. A project can have multiple templates attached to control orthogonal aspects of its behavior. From a template, you can see which projects inherit from it, and you can freely detach it and attach a different one. This makes managing a large number of projects with similar configs, all evolving at somewhat different rates, really easy.
- Build results
TeamCity has very good integration with xUnit and code coverage tools, so you can quickly see test results and coverage as part of a build. Gitlab recently got better at this (it can now at least parse xUnit results), but you can still only see test results in the merge request view. TeamCity can also track a metric over time and fail a build if it drops (e.g. PR builds fail if code coverage drops more than X%). TeamCity also supports adding custom tabs to the build page, so reports generated by the build are easily viewable in the UI (vs. Gitlab, where you have to download the artifact and then open it to view it).
- Overall view of runner status
It's very easy in TeamCity to see the build queue, an estimate of when your build will run, and how long it's expected to take based on past builds.
- Dashboard
For me it's easier in TeamCity to see the overall status of deployments to a set of environments (i.e. what's on dev/stage/prod) that might span multiple source code repos. At a glance I can see what changes are pending for each environment, etc. In Gitlab things are too tied to a single repo or a single environment, and the pages tend to present either too much or too little information. Also, in TeamCity I can configure my own dashboard to see all of the stuff I care about and hide other things, all in one place.
- System wide configs
There are some settings that apply to the whole system (repository URLs, etc.). There's no easy way in Gitlab to have system-wide settings; they have to be defined at the group or repository level. In TeamCity, you can configure things at any level and then override them at lower levels.
- Extensibility
TeamCity supports plugins. I know this can lead to the Jenkins problem of too many plugin versions, etc., but in TeamCity you tend to use far fewer plugins, and the plugin APIs have been super stable (I've written plugins against TeamCity 8, which is 4 major versions old, and they work fine on the latest). It's really nice to be able to write a plugin that performs common behavior, apply it easily across projects, and have it nicely integrated into the UI.
To me, Gitlab CI seems useful for simple things, but overall it's only about 70% of the way to being something that could replace TeamCity.
We did recently add pipeline info to the operations dashboard (https://docs.gitlab.com/ee/user/operations_dashboard/), which I know isn't exactly what you're looking for here but we are making progress in this direction and recognize the gap.
This can be achieved by using includes to set the variables, which is admittedly a workaround. We do have an open issue (https://gitlab.com/gitlab-org/gitlab-ce/issues/3897) to implement instance level variables that would solve this.
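A sketch of that workaround (the project path, file name, and variable are all hypothetical): keep the variables in a shared file and `include` it from each project's `.gitlab-ci.yml`.

```yaml
# ci-vars.yml, kept in a shared project (hypothetical names):
variables:
  NEXUS_URL: "https://nexus.example.com"

# .gitlab-ci.yml in each consuming project:
include:
  - project: 'infra/ci-templates'
    file: 'ci-vars.yml'
```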
- Extensibility
This is an interesting one, because plugins are, at least in my opinion, what makes Jenkins a mess to use in reality. And believe me, I've managed plenty of Jenkins instances in my career with lots of "cool" plugins that do something great, at least while they work. It is one of our values that we play well with others, though, so I'd be curious to work with you to understand specifically what you'd like to make GitLab do that can't be done through your .gitlab-ci.yml. Our goal is that you should never be blocked, or really have to jump through hoops, yet not be dependent on a lot of your own code or third-party plugins.
I hear you on plugins, and I agree they are problematic. I went back and forth on whether to include this or not TBH.
I'll give you a couple of examples of use cases for plugins:
We have an artifact repo that can store NPM, Python, and other artifacts (Nexus, if you're interested). I wrote a TeamCity plugin that grabs artifacts from a build and uploads them to the repository. Obviously this can be done in a script, but a couple of things make doing it in a plugin nice:
- You can set it up as a reusable build feature that can be inherited from templates (i.e. all builds of a particular type publish artifacts to Nexus)
- You can get nice UI support. The plugin contributes a tab to the build page that links to the artifacts in Nexus.
- The plugin can tie into the build cleanup process and remove the artifacts from the repository when the build is cleaned up. This is useful for snapshot/temporary artifacts that you want to publish so people can test with them, but have automatically removed later.
Another example of where plugins have proved useful is influencing build triggering: we have some things that happen in the build server, and then other stuff happens outside of it. When all that completes, we want to kick off another process in the build server (that sounds abstract; think: an external deploy process runs, and once the deploy stabilizes you kick off QA jobs). In TeamCity you can write a plugin that keeps builds in the queue until the plugin reports that they are ready to run.
While plugins aren't the first tool I reach for when looking at how to provide reusable functionality in a build server, I have written several plugins for both Jenkins and TeamCity. Overall, I don't think the Jenkins/TeamCity model of running plugins in-process is a good one, and it leads to most of the problems people have with them (although TeamCity is much better here: Jenkins basically exposes most of its guts to plugins, which makes keeping the API stable virtually impossible, while TeamCity has APIs specifically designed for plugins that it has kept stable very effectively). A model where a plugin is just a Docker container that communicates with the build server through defined APIs, combined with some way for it to attach UI elements to a build that can call back into the plugin, would be much nicer. This seems to be more like what Drone is doing, but I haven't played around much with it.
I think Gitlab has a strong philosophy of wanting to build out everything that anyone will ever need, all nicely integrated, and that's a great ideal. I think in practice, it's REALLY hard to be all things to all people. People have existing systems and/or weird use cases that it just doesn't make sense to handle all of, and plugins are a useful tool in addressing that.
If you work at Gitlab, you can download the free version of TeamCity from their website. Set up a few projects and it will be obvious what it does better.
You may want to try C#, Java, Python, and Go projects to see the differences, with build agents on Windows and Linux. There are some pretty tight integrations for some of these.