What is the one thing all malware and viruses depend upon to operate?


JimB

Member
Oct 13, 2023
The ability to manipulate specific code in the OS based on known locations and known filenames. There are few that don't behave in this manner.

What would happen if, on installation of the OS, an encrypted database keyed to your master entropic password were established that stored the name of every component of the OS (.exe, .com, .dll, .ini, and so on), visible and hidden, encrypted those names, and replaced them with the equivalent hashed random filenames generated by the lookup table?

Even fileless malware depends on the ability to manipulate some code somewhere on the system, or a registry entry, to gain a foothold for its further operations. Even going direct to memory and operating from there still requires some access to the file system, except for those that come as a complete package requiring no local resources, and I haven't seen many of those.

Yeah, you'd give up a few CPU cycles in overhead. But essentially no worse than an entire encrypted system, which might benefit from the additional layer of obfuscation.

Flame on. I'm sure there are flaws in this proposal... just curious about a reaction.
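The renaming scheme proposed above can be sketched concretely. This is a hypothetical illustration, not anything from the thread: it assumes the key is stretched from the master password with PBKDF2, and that each on-disk name is derived with an HMAC so it looks entropic but is reproducible by the key holder.

```python
# Hypothetical sketch of the proposed filename-obfuscation lookup table.
# derive_key, obfuscated_name, and resolve are illustrative names, not a real OS API.
import hashlib
import hmac
import os

def derive_key(master_password: str, salt: bytes) -> bytes:
    # Stretch the user's "master entropic password" into an obfuscation key.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 200_000)

def obfuscated_name(key: bytes, real_name: str) -> str:
    # Deterministic, random-looking replacement name for one file.
    return hmac.new(key, real_name.encode(), hashlib.sha256).hexdigest()[:32]

# Build the lookup table at "install time": on-disk name -> real name.
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
table = {obfuscated_name(key, n): n
         for n in ["notepad.exe", "kernel32.dll", "boot.ini"]}

def resolve(key: bytes, wanted: str) -> str:
    # Near-real-time translation for the key holder: real name -> on-disk name.
    return obfuscated_name(key, wanted)

print(resolve(key, "notepad.exe") in table)  # True: the key owner can find files
```

Without the key, malware scanning the disk sees only opaque 32-character names; it cannot locate `kernel32.dll` by name, which is the point of the proposal.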
 
A program that runs on an OS has to have access to all OS functions intended for applications, or else it can't be said to be running on that OS. What would change this is some sort of permission system that only allows "some" applications to access "some" OS functions. This means, for example, that the ability to "see" the files in the system could be restricted to trusted apps that have that need (file utilities and backup tools, for example).

The problem with having such a permission system is that the average user doesn't want to maintain and debug it when things they feel they need to use stop working. Imagine you just spent hundreds of dollars on some great new software application (say a backup tool and its attached tape system or something) and it doesn't work right because the OS is blocking the tool. You as an average user now need to figure out how to configure your system permissions, and you can't even back up your current config with your new tool first.

You'll be wanting to say that the software manufacturer should have the app do the configuration of the OS for you. Well sure, that would be great if you could trust every app manufacturer. But there's the rub... how do you tell the bad guys from the good guys? Now you need the OS manufacturer to get involved in the decision of which apps you'll be allowed to run and which you won't. This is the very system of Apple iOS and Android OS. Apple doesn't allow you to buy/install/own an application that, for example, shows nudity. Do you want your PC OS to block you from watching X-rated content in return for slightly more security against malware? You might want that, but I doubt everyone does... and I think that's the slippery slope that scares OS manufacturers away from going there.
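The permission system described above amounts to a default-deny capability check per application. A minimal sketch, with entirely hypothetical names (no real OS exposes this exact interface):

```python
# Minimal sketch of a per-application, default-deny permission system.
# APP_PERMISSIONS and check() are hypothetical illustrations, not a real OS API.
APP_PERMISSIONS = {
    "backup_tool.exe": {"fs.enumerate", "fs.read"},   # trusted: may "see" files
    "notepad.exe": {"fs.read", "fs.write"},           # may open files it is handed
}

def check(app: str, capability: str) -> bool:
    # Unknown executables (e.g. freshly dropped malware) get an empty set.
    return capability in APP_PERMISSIONS.get(app, set())

print(check("backup_tool.exe", "fs.enumerate"))  # True: vetted tool may list files
print(check("dropper.exe", "fs.enumerate"))      # False: unvetted code is blocked
```

The maintenance burden the post describes lives in that `APP_PERMISSIONS` table: someone has to populate and debug it whenever a legitimate new tool stops working.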
 
Thing is, if the virus/malware gets the same 'status' as, say, Notepad, it will be able to access file names etc.
Expand please. If the file names are entropic, encrypted, and not what they are expected to be, how would one find anything? The only one who would be able to is the owner of the encryption and file-name obfuscation keys. Translated in near real time, and only for the user who owns those keys...
 
A program that runs on an OS has to have access to all OS functions intended for applications, or else it can't be said to be running on that OS. […]
As is the case now, every application must be vetted by the user. If you choose to allow an application that you "vetted" into the ecosystem, so be it. And if it's malicious, that's on you. Every application you allow in has full, unfettered access to the file-name obfuscation tables and encryption. The same would be true for a plainly encrypted system. The OS manufacturer would have zero input, since everything is hidden from them as well. The only one with the keys to the kingdom is the user.

But my use case is the virus and malware you did not invite in, irrespective of user permissions. All they would see is garbage.

How do you tell the good guys from the bad guys now? 🤔
 
every application must be vetted by the user
Well, that depends on your definition of application. I was using it in the sense of "executable code". Any code that executes on the OS is designed to do so and expects to have access to the OS APIs in order to run. Aside from what I mentioned about permissions restricting certain executables from doing certain things, how do you handle a situation where a bug in one executable allows it to be exploited, followed by the downloading and executing of another executable? It's pretty much impossible to know what is running on the system because of the user versus what is running on the system in spite of the user. Relying on users to know this distinction is folly.
 
Expand please. If the file names are entropic and not what they are expected to be and encrypted how would one find anything? The only one that would be able to is the owner of the encryption and file name obfuscation key. Translated in near real time and only for the user who owns those keys....
The malware typically gets in through user actions, and this usually runs the malware as the user initially; then exploits are used to elevate permissions and gain system-level access.

Since the program is running in the OS as the user, or as the system, it has access to all the file names like normal. This means the automated access that decrypts and uses files for the user and the operating system will work for the malware too.
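The objection can be made concrete: a transparent name-resolution service keyed to the user's session cannot distinguish callers, so any code running in that session gets the same answers. A hypothetical sketch (the resolver and key names are illustrative, not from the proposal as written):

```python
# Sketch of why transparent decryption helps any code running as the user.
# resolve() stands in for the proposed name-translation service; it only
# checks *which session key* is presented, not *which program* is asking.
import hashlib
import hmac

SESSION_KEY = b"user-session-key"  # unlocked once at login, shared by the session

def resolve(session_key: bytes, real_name: str) -> str:
    # Same key in, same on-disk name out, regardless of the caller.
    return hmac.new(session_key, real_name.encode(), hashlib.sha256).hexdigest()[:16]

notepad_view = resolve(SESSION_KEY, "secrets.txt")  # legitimate app's lookup
malware_view = resolve(SESSION_KEY, "secrets.txt")  # malware in the same session
print(notepad_view == malware_view)  # True: the resolver can't tell them apart
```

Distinguishing the two callers would require exactly the per-application permission system discussed earlier in the thread, with all of its maintenance burden.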
 
Maybe I don't understand what you have in mind, but it seems what you propose would result in an unusable system. If I as a user want to look for a file, say by using Explorer, then I assume I'd be looking at the decrypted filenames. How else could I open the file I'm after?

Say I want to edit a file and click Open in Notepad; same story. I select 'document-whatever.txt' and Notepad will try to open 'document-whatever.txt'. IOW, Notepad has access to decrypted filenames. So, if my malware gets the same privileges as Notepad, it will be able to 'see' filenames.
Yup. But it was you who gave that malware the authority to do what you describe. No amount of prevention (witness the LastPass engineer's faux pas in bringing down the system) mitigates the user-error factor. That, you cannot, with certainty, eliminate.

(attached image: FB_IMG_1567207550524.jpg)
 
Maybe I don't understand what you have in mind, but it seems what you propose would result in an unusable system. […]
I have in mind the situation where a flaw is exploited that requires zero human interaction on the part of the attacked system to accomplish.

As originally proposed, the file system gets entropically renamed (think of a complex password of sufficient length), with each new name forming part of a lookup table back to the real file name. No worse than an encrypted system that gets decrypted in real time so that you see the data as it originally was. When you encrypt a system, do you really expect to operate on the encrypted raw data?
 
you who had given that malware the authority to do what you describe
It seems you show a lack of understanding of how software exploits run.

I guess you *could* argue that you chose to load an [unknown to you] corrupt file into your image editor, which invoked an exploit in the image editor to download some malware behind your back and then launch it. I would say it is quite a stretch to call that a user-intended or chosen action.

That malware can then be written to the hard drive by the trusted paint program as some sort of plugin for it, and whenever you launch the program in the future, the malware gets all the permissions it needs to be a problem again.

By the time the code is running on your PC, it seems pretty much impossible to tell good code from bad without a very onerous system to manage such things, which no average user will operate without complaint. (Given that a lot of average users run Windows as an administrator at all times because they find UAC too annoying/complex to manage.)
 
It seems you show a lack of understanding of how software exploits run. […]
You imply that users "choose" to run exploits. Yeah, sure they do...

But you did import the exploit into the system, did you not? Did it just magically appear from the aether? Again, that's on you. Scroll up and look at Dave: human error. Saying that I don't know how exploits run is ludicrous. You know nothing about me or what I know or don't know. For that, your analysis is summarily rejected.
 
Did it just magically appear from the aether?
Those kinds of exploits ALSO exist, yes... it's called a drive-by, zero-click vulnerability. You really need to give up on the idea that you can have an OS that can tell the difference between good code and bad code without having some authority play the role of "blesser of executables".
 