You start off with a nice large disk to accommodate most of the programs your users need, and things are moving along just fine. Then you upgrade to smaller, more costly SSD drives for a boost in performance, the vendor of one of your common applications increases the size of its install, or another large package is needed. Either way, you are approaching the maximum capacity of your drive. To make matters worse, each time a new user logs onto the system a local profile is generated, taking up a bit more space until the disk is full. You need to find some room and free up precious disk space. Here are the methods we have been using to unlock some extra space.
Method One: Change the page file size
By default, Windows manages your page file size; it may be time to take back control. You can send this script to your systems to set an initial page file size of 2 GB and a maximum of 4 GB:
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=2048,MaximumSize=4096
The page file adjustment will take effect after a reboot.
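After the reboot, you can confirm the new values took hold with a quick query through the same WMIC interface:
wmic pagefileset get Name,InitialSize,MaximumSize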
Method Two: Disable search indexing
The search index in Windows can take up a bit of space. If you happen to not need it, you can disable the indexing service with this command:
sc config WSearch start= disabled
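Disabling the service only stops future indexing; the existing index stays on disk. On most systems it lives at the path below, so once the service is stopped you can delete it too (verify the path on your build before removing anything):
sc stop WSearch
del /f /q "%ProgramData%\Microsoft\Search\Data\Applications\Windows\Windows.edb"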
Method Three: Empty all recycle bins
Users may have deleted large files to help you in your quest to keep things clean, but they may not realize that those files still take up room until they empty their recycle bins. This will get rid of all the recycle bin contents on a system:
:: Empty all recycle bins on Windows 7 and up
rmdir /s /q %SystemDrive%\$Recycle.Bin 2>NUL
:: This empties all recycle bins on Windows XP and Server 2003
rmdir /s /q %SystemDrive%\RECYCLER 2>NUL
:: Return exit code to calling application
exit /B 0
Method Four: Clean up Windows Update files
Microsoft doesn't officially condone this activity, but it can be handy if you update your systems by re-imaging and don't rely much on automatic updates. Note that this can break automatic updates in some cases; at a minimum, it will cause the update directories to rebuild. Consider adjusting the automatic update settings if this change interferes with updates.
net stop wuauserv
del c:\windows\SoftwareDistribution /q /s
net start wuauserv
Method Five: Turn off hibernation
Windows keeps a large system file (hiberfil.sys) on the system drive to facilitate system hibernation. If your sleep settings are such that you don't need hibernation, you can remove this file by turning hibernation off with this command.
powercfg.exe -h off
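You can verify hibernation is off (and that hiberfil.sys has been removed from the system drive) by listing the available sleep states:
powercfg.exe -a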
Method Six: Clean up temp files
There are a number of log and cache files you may not need. Here is an exhaustive script that comes to us via PDQ Deploy and Reddit.
:: Purpose: Temp file cleanup
:: Requirements: Admin access helps but is not required
:: Author: reddit.com/user/vocatus ( vocatus.gate at gmail ) // PGP key: 0x07d1490f82a211a2
:: Version: 3.5.8 ! Move IE ClearMyTracksByProcess to Vista and up section (does not run on XP/2003)
:: 3.5.7 * Add /u/neonicacid's suggestion to purge leftover NVIDIA driver installation files
:: 3.5.6 * Merge nemchik's pull request to delete .blf and.regtrans-ms files (ported from Tron project)
:: * Merge nemchik's pull request to purge Flash and Java temp locations (ported from Tron project)
:: 3.5.5 + Add purging of additional old Windows version locations (left in place from Upgrade installations)
:: 3.5.4 + Add purging of queued Windows Error Reporting reports. Thanks to /u/neonicacid
:: 3.5.3 * Add removal of C:\HP folder
:: 3.5.2 * Improve XP/2k3 detection by removing redundant code
:: 3.5.1 ! Fix stall error on C:\Windows.old cleanup; was missing /D Y flag to answer "yes" to prompts. Thanks to /u/Roquemore92
:: 3.5.0 + Add removal of C:\Windows.old folder if it exists (left over from in-place Windows version upgrades). Thanks to /u/bodkov
:: 3.4.5 * Add cleaning of Internet Explorer using Windows built-in method. Thanks to /u/cuddlychops06
:: <-- outdated changelog comments removed -->
:: 1.0.0 Initial write
SETLOCAL
:::::::::::::::
:: VARIABLES :: -------------- These are the defaults. Change them if you so desire. --------- ::
:::::::::::::::
:: Set your paths here. Don't use trailing slashes (\) in directory paths
set LOGPATH=%SystemDrive%\Logs
set LOGFILE=%COMPUTERNAME%_TempFileCleanup.log
:: Max log file size allowed in bytes before rotation and archive. 1048576 bytes is one megabyte
set LOG_MAX_SIZE=104857600
:: --------------------------- Don't edit anything below this line --------------------------- ::
:::::::::::::::::::::
:: PREP AND CHECKS ::
:::::::::::::::::::::
@echo off
%SystemDrive% && cls
set SCRIPT_VERSION=3.5.8
set SCRIPT_UPDATED=2015-09-22
:: Get the date into ISO 8601 standard format (yyyy-mm-dd) so we can use it
FOR /f %%a in ('WMIC OS GET LocalDateTime ^| find "."') DO set DTS=%%a
set CUR_DATE=%DTS:~0,4%-%DTS:~4,2%-%DTS:~6,2%
title [TempFileCleanup v%SCRIPT_VERSION%]
:::::::::::::::::::::::
:: LOG FILE HANDLING ::
:::::::::::::::::::::::
:: Make the logfile if it doesn't exist
if not exist %LOGPATH% mkdir %LOGPATH%
if not exist %LOGPATH%\%LOGFILE% echo. > %LOGPATH%\%LOGFILE%
:: Check log size. If it's less than our max, then jump to the cleanup section
for %%R in (%LOGPATH%\%LOGFILE%) do IF %%~zR LSS %LOG_MAX_SIZE% goto os_version_detection
:: If the log was too big, go ahead and rotate it.
pushd %LOGPATH%
del %LOGFILE%.ancient 2>NUL
rename %LOGFILE%.oldest %LOGFILE%.ancient 2>NUL
rename %LOGFILE%.older %LOGFILE%.oldest 2>NUL
rename %LOGFILE%.old %LOGFILE%.older 2>NUL
rename %LOGFILE% %LOGFILE%.old 2>NUL
popd
::::::::::::::::::::::::::
:: OS VERSION DETECTION ::
::::::::::::::::::::::::::
:os_version_detection
:: Detect the version of Windows we're on. This determines a few things later in the script
set WIN_VER=undetected
for /f "tokens=3*" %%i IN ('reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion" /v ProductName ^| Find "ProductName"') DO set WIN_VER=%%i %%j
::::::::::::::::::::::::::
:: USER CLEANUP SECTION :: -- Most stuff in here doesn't require Admin rights
::::::::::::::::::::::::::
:: Create the log header for this job
echo -------------------------------------------------------------------------------------------->> %LOGPATH%\%LOGFILE%
echo %CUR_DATE% %TIME% TempFileCleanup v%SCRIPT_VERSION%, executing as %USERDOMAIN%\%USERNAME%>> %LOGPATH%\%LOGFILE%
echo -------------------------------------------------------------------------------------------->> %LOGPATH%\%LOGFILE%
echo.
echo Starting temp file cleanup
echo --------------------------
echo.
echo Cleaning USER temp files...
::::::::::::::::::::::
:: Version-agnostic :: (these jobs run regardless of OS version)
::::::::::::::::::::::
:: Create log line
echo. >> %LOGPATH%\%LOGFILE% && echo ! Cleaning USER temp files...>> %LOGPATH%\%LOGFILE% && echo. >> %LOGPATH%\%LOGFILE%
:: User temp files, history, and random My Documents stuff
del /F /S /Q "%TEMP%" >> %LOGPATH%\%LOGFILE% 2>NUL
:: Previous Windows versions cleanup. These are left behind after upgrading an installation from XP/Vista/7/8 to a higher version. Thanks to /u/bodkov and others
if exist %SystemDrive%\Windows.old\ (
takeown /F %SystemDrive%\Windows.old\* /R /A /D Y
icacls %SystemDrive%\Windows.old\*.* /C /T /grant administrators:F
rmdir /S /Q %SystemDrive%\Windows.old\
)
if exist %SystemDrive%\$Windows.~BT\ (
takeown /F %SystemDrive%\$Windows.~BT\* /R /A
icacls %SystemDrive%\$Windows.~BT\*.* /T /grant administrators:F
rmdir /S /Q %SystemDrive%\$Windows.~BT\
)
if exist %SystemDrive%\$Windows.~WS (
takeown /F %SystemDrive%\$Windows.~WS\* /R /A
icacls %SystemDrive%\$Windows.~WS\*.* /T /grant administrators:F
rmdir /S /Q %SystemDrive%\$Windows.~WS\
)
::::::::::::::::::::::
:: Version-specific :: (these jobs run depending on OS version)
::::::::::::::::::::::
:: First block handles XP/2k3, second block handles Vista and up
:: Read 9 characters into the WIN_VER variable. Only versions of Windows older than Vista had "Microsoft" as the first part of their title,
:: so if we don't find "Microsoft" in the first 9 characters we can safely assume we're not on XP/2k3.
if /i "%WIN_VER:~0,9%"=="Microsoft" (
for /D %%x in ("%SystemDrive%\Documents and Settings\*") do (
del /F /Q "%%x\Local Settings\Temp\*" 2>NUL
del /F /Q "%%x\Recent\*" 2>NUL
del /F /Q "%%x\Local Settings\Temporary Internet Files\*" 2>NUL
del /F /Q "%%x\Local Settings\Application Data\ApplicationHistory\*" 2>NUL
del /F /Q "%%x\My Documents\*.tmp" 2>NUL
del /F /Q "%%x\Application Data\Sun\Java\*" 2>NUL
del /F /Q "%%x\Application Data\Adobe\Flash Player\*" 2>NUL
del /F /Q "%%x\Application Data\Macromedia\Flash Player\*" 2>NUL
)
) else (
for /D %%x in ("%SystemDrive%\Users\*") do (
del /F /Q "%%x\AppData\Local\Temp\*" 2>NUL
del /F /Q "%%x\AppData\Roaming\Microsoft\Windows\Recent\*" 2>NUL
del /F /Q "%%x\AppData\Local\Microsoft\Windows\Temporary Internet Files\*" 2>NUL
del /F /Q "%%x\My Documents\*.tmp" 2>NUL
del /F /Q "%%x\AppData\LocalLow\Sun\Java\*" 2>NUL
del /F /Q "%%x\AppData\Roaming\Adobe\Flash Player\*" 2>NUL
del /F /Q "%%x\AppData\Roaming\Macromedia\Flash Player\*" 2>NUL
del /F /Q "%%x\AppData\Local\Microsoft\Windows\*.blf" 2>NUL
del /F /Q "%%x\AppData\Local\Microsoft\Windows\*.regtrans-ms" 2>NUL
del /F /Q "%%x\*.blf" 2>NUL
del /F /Q "%%x\*.regtrans-ms" 2>NUL
:: Internet Explorer cleanup
rundll32.exe inetcpl.cpl,ClearMyTracksByProcess 4351
)
)
echo. && echo Done. && echo.
echo. >> %LOGPATH%\%LOGFILE% && echo Done.>> %LOGPATH%\%LOGFILE% && echo. >>%LOGPATH%\%LOGFILE%
::::::::::::::::::::::::::::
:: SYSTEM CLEANUP SECTION :: -- Most stuff here requires Admin rights
::::::::::::::::::::::::::::
echo.
echo Cleaning SYSTEM temp files...
echo Cleaning SYSTEM temp files... >> %LOGPATH%\%LOGFILE% && echo.>> %LOGPATH%\%LOGFILE%
::::::::::::::::::::::
:: Version-agnostic :: (these jobs run regardless of OS version)
::::::::::::::::::::::
:: JOB: System temp files
del /F /S /Q "%WINDIR%\TEMP\*" >> %LOGPATH%\%LOGFILE% 2>NUL
:: JOB: Root drive garbage (usually C drive)
rmdir /S /Q %SystemDrive%\Temp >> %LOGPATH%\%LOGFILE% 2>NUL
for %%i in (bat,txt,log,jpg,jpeg,tmp,bak,backup,exe) do (
del /F /Q "%SystemDrive%\*.%%i">> "%LOGPATH%\%LOGFILE%" 2>NUL
)
:: JOB: Remove files left over from installing Nvidia/ATI/AMD/Dell/Intel/HP drivers
for %%i in (NVIDIA,ATI,AMD,Dell,Intel,HP) do (
rmdir /S /Q "%SystemDrive%\%%i" 2>NUL
)
:: JOB: Clear additional unneeded files from NVIDIA driver installs
if exist "%ProgramFiles%\Nvidia Corporation\Installer2" rmdir /s /q "%ProgramFiles%\Nvidia Corporation\Installer2"
if exist "%ALLUSERSPROFILE%\NVIDIA Corporation\NetService" del /f /q "%ALLUSERSPROFILE%\NVIDIA Corporation\NetService\*.exe"
:: JOB: Remove the Microsoft Office installation cache. Usually around ~1.5 GB
if exist %SystemDrive%\MSOCache rmdir /S /Q %SystemDrive%\MSOCache >> %LOGPATH%\%LOGFILE%
:: JOB: Remove the Microsoft Windows installation cache. Can be up to 1.0 GB
if exist %SystemDrive%\i386 rmdir /S /Q %SystemDrive%\i386 >> %LOGPATH%\%LOGFILE%
:: JOB: Empty all recycle bins on Windows 5.1 (XP/2k3) and 6.x (Vista and up) systems
if exist %SystemDrive%\RECYCLER rmdir /s /q %SystemDrive%\RECYCLER
if exist %SystemDrive%\$Recycle.Bin rmdir /s /q %SystemDrive%\$Recycle.Bin
:: JOB: Clear queued and archived Windows Error Reporting (WER) reports
echo. >> %LOGPATH%\%LOGFILE%
if exist "%USERPROFILE%\AppData\Local\Microsoft\Windows\WER\ReportArchive" rmdir /s /q "%USERPROFILE%\AppData\Local\Microsoft\Windows\WER\ReportArchive"
if exist "%USERPROFILE%\AppData\Local\Microsoft\Windows\WER\ReportQueue" rmdir /s /q "%USERPROFILE%\AppData\Local\Microsoft\Windows\WER\ReportQueue"
if exist "%ALLUSERSPROFILE%\Microsoft\Windows\WER\ReportArchive" rmdir /s /q "%ALLUSERSPROFILE%\Microsoft\Windows\WER\ReportArchive"
if exist "%ALLUSERSPROFILE%\Microsoft\Windows\WER\ReportQueue" rmdir /s /q "%ALLUSERSPROFILE%\Microsoft\Windows\WER\ReportQueue"
:: JOB: Windows update logs & built-in backgrounds (space waste)
del /F /Q %WINDIR%\*.log >> %LOGPATH%\%LOGFILE% 2>NUL
del /F /Q %WINDIR%\*.txt >> %LOGPATH%\%LOGFILE% 2>NUL
del /F /Q %WINDIR%\*.bmp >> %LOGPATH%\%LOGFILE% 2>NUL
del /F /Q %WINDIR%\*.tmp >> %LOGPATH%\%LOGFILE% 2>NUL
del /F /Q %WINDIR%\Web\Wallpaper\*.* >> %LOGPATH%\%LOGFILE% 2>NUL
rmdir /S /Q %WINDIR%\Web\Wallpaper\Dell >> %LOGPATH%\%LOGFILE% 2>NUL
:: JOB: Flash cookies (both locations)
rmdir /S /Q "%APPDATA%\Macromedia\Flash Player\#SharedObjects" >> %LOGPATH%\%LOGFILE% 2>NUL
rmdir /S /Q "%APPDATA%\Macromedia\Flash Player\macromedia.com\support\flashplayer\sys" >> %LOGPATH%\%LOGFILE% 2>NUL
::::::::::::::::::::::
:: Version-specific :: (these jobs run depending on OS version)
::::::::::::::::::::::
:: JOB: Windows XP/2k3: "guided tour" annoyance
if /i "%WIN_VER:~0,9%"=="Microsoft" (
del %WINDIR%\system32\dllcache\tourstrt.exe 2>NUL
del %WINDIR%\system32\dllcache\tourW.exe 2>NUL
rmdir /S /Q %WINDIR%\Help\Tours 2>NUL
)
:: JOB: Windows Server: remove built-in media files (all Server versions)
echo %WIN_VER% | findstr /i /%SystemDrive%"server" >NUL
if %ERRORLEVEL%==0 (
echo.
echo ! Server operating system detected.
echo Removing built-in media files ^(.wav, .midi, etc^)...
echo.
echo. >> %LOGPATH%\%LOGFILE% && echo ! Server operating system detected. Removing built-in media files ^(.wav, .midi, etc^)...>> %LOGPATH%\%LOGFILE% && echo. >> %LOGPATH%\%LOGFILE%
:: 2. Take ownership of the files so we can actually delete them. By default even Administrators have Read-only rights.
echo Taking ownership of %WINDIR%\Media in order to delete files... && echo.
echo Taking ownership of %WINDIR%\Media in order to delete files... >> %LOGPATH%\%LOGFILE% && echo. >> %LOGPATH%\%LOGFILE%
if exist %WINDIR%\Media takeown /f %WINDIR%\Media /r /d y >> %LOGPATH%\%LOGFILE% 2>NUL && echo. >> %LOGPATH%\%LOGFILE%
if exist %WINDIR%\Media icacls %WINDIR%\Media /grant administrators:F /t >> %LOGPATH%\%LOGFILE% && echo. >> %LOGPATH%\%LOGFILE%
:: 3. Do the cleanup
rmdir /S /Q %WINDIR%\Media>> %LOGPATH%\%LOGFILE% 2>NUL
echo Done.
echo.
echo Done. >> %LOGPATH%\%LOGFILE%
echo. >> %LOGPATH%\%LOGFILE%
)
:: JOB: Windows CBS logs
:: these only exist on Vista and up, so we look for "Microsoft", and assuming we don't find it, clear out the folder
echo %WIN_VER% | findstr /i /%SystemDrive%"server" >NUL
if not %ERRORLEVEL%==0 del /F /Q %WINDIR%\Logs\CBS\* >> %LOGPATH%\%LOGFILE% 2>NUL
:: JOB: Windows XP/2003: Cleanup hotfix uninstallers. They use a lot of space so removing them is beneficial.
:: Really we should use a tool that deletes their corresponding registry entries, but oh well.
:: 0. Check Windows version.
:: We simply look for "Microsoft" in the version name, because only versions prior to Vista had the word "Microsoft" as part of their version name
:: Everything after XP/2k3 drops the "Microsoft" prefix
echo %WIN_VER% | findstr /i /%SystemDrive%"Microsoft" >NUL
if %ERRORLEVEL%==0 (
:: 1. If we made it here we're doing the cleanup. Notify user and log it.
echo.
echo ! Windows XP/2003 detected.
echo Removing hotfix uninstallers...
echo.
echo. >> %LOGPATH%\%LOGFILE% && echo ! Windows XP/2003 detected. Removing hotfix uninstallers...>> %LOGPATH%\%LOGFILE%
:: 2. Build the list of hotfix folders. They always have "$" signs around their name, e.g. "$NtUninstall092330$" or "$hf_mg$"
pushd %WINDIR%
dir /A:D /B $*$ > %TEMP%\hotfix_nuke_list.txt 2>NUL
:: 3. Do the hotfix clean up
for /f %%i in (%TEMP%\hotfix_nuke_list.txt) do (
echo Deleting %%i...
echo Deleted folder %%i >> %LOGPATH%\%LOGFILE%
rmdir /S /Q %%i >> %LOGPATH%\%LOGFILE% 2>NUL
)
:: 4. Log that we are done with hotfix cleanup and leave the Windows directory
echo Done. >> %LOGPATH%\%LOGFILE% && echo.>> %LOGPATH%\%LOGFILE%
echo Done.
del %TEMP%\hotfix_nuke_list.txt>> %LOGPATH%\%LOGFILE%
echo.
popd
)
echo Done. && echo.
echo Done.>> %LOGPATH%\%LOGFILE% && echo. >>%LOGPATH%\%LOGFILE%
::::::::::::::::::::::::::
:: Cleanup and complete ::
::::::::::::::::::::::::::
:complete
@echo off
echo -------------------------------------------------------------------------------------------->> %LOGPATH%\%LOGFILE%
echo %CUR_DATE% %TIME% TempFileCleanup v%SCRIPT_VERSION%, finished. Executed as %USERDOMAIN%\%USERNAME%>> %LOGPATH%\%LOGFILE%
echo -------------------------------------------------------------------------------------------->> %LOGPATH%\%LOGFILE%
echo.
echo Cleanup complete.
echo.
echo Log saved at: %LOGPATH%\%LOGFILE%
echo.
ENDLOCAL
Method Seven: Set a logoff policy to clear the Downloads folder
You might consider setting either an AD-based Group Policy or a local logoff script policy to run the following script:
Option Explicit
' Script to clean up FOLDERID_Downloads.
' Version 1.0.0
' 2016-02-05
' Cactus Data. Gustav Brock
Const USERPROFILE = &H28
Const FolderDownloads = "Downloads"
Dim objFSO
Dim objAppShell
Dim objDownloadsFolder
Dim strDownloadsFolder
Dim strUserProfilerFolder
' Enable simple error handling.
On Error Resume Next
' Find user's Downloads folder.
Set objAppShell = CreateObject("Shell.Application")
Set objDownloadsFolder = objAppShell.Namespace(USERPROFILE)
strUserProfilerFolder = objDownloadsFolder.Self.Path
strDownloadsFolder = strUserProfilerFolder & "\" & FolderDownloads
' Create the File System Object.
Set objFSO = CreateObject("Scripting.FileSystemObject")
If Not objFSO.FolderExists(strDownloadsFolder) Then
Call ErrorHandler("No access to " & strDownloadsFolder & ".")
End If
' Delete files.
objFSO.DeleteFile(strDownloadsFolder & "\*.*")
Set objDownloadsFolder = Nothing
Set objFSO = Nothing
Set objAppShell = Nothing
WScript.Quit
' Supporting subfunctions
' -----------------------
Sub ErrorHandler(Byval strMessage)
Set objDownloadsFolder = Nothing
Set objAppShell = Nothing
Set objFSO = Nothing
WScript.Echo strMessage
WScript.Quit
End Sub
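To put this in place, save the script as something like ClearDownloads.vbs (the file name is just an example) and assign it under User Configuration → Windows Settings → Scripts (Logon/Logoff) in Group Policy, or as a local logoff script via gpedit.msc. You can test it by hand first from a command prompt:
cscript //nologo ClearDownloads.vbs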
Method Eight: Clean out old user profiles
Let's face it: users sometimes put large files in nonstandard locations. Instead of searching through their folders to find those files, you may decide it's best just to nuke the local profile if they haven't been using the system for a while. You can get Delprof2.exe from here and run it with the following parameters to remove profiles older than two weeks (14 days). Be sure to exclude any profiles you don't want removed; below, the Administrator, Public, and MSSQL profiles are excluded.
/d:14 /q /ed:Default* /ed:Administrator* /ed:Public* /ed:MSSQL* /i
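Putting it together, a full run looks like the lines below; the /l variant is a list-only dry run worth doing first (double-check the switches against the Delprof2 documentation for your version):
Delprof2.exe /l /d:14 /ed:Default* /ed:Administrator* /ed:Public* /ed:MSSQL*
Delprof2.exe /d:14 /q /ed:Default* /ed:Administrator* /ed:Public* /ed:MSSQL* /i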
Wednesday, July 23, 2014
MDT 2013 Task Sequence Stops After Reboot - MS Security Essentials
I was having quite the issue with one of my MDT images. The reference computer seemed to sysprep and capture fine. I got a wim file and no errors were thrown by MDT.
When I went to deploy this image it would push down to the disk just fine and initiate a reboot. It's here that things stopped. No autologin.. no domain join... nothing. It simply sat at the logon window. I thought it might be a group policy enforcing the CTRL - ALT - DEL. Disabling this didn't help.
The last entry in the logs was simply a notice that a reboot was initiated.. all other messages indicated that the process was running fine.
It turns out in my case all of this was due to an unclean uninstall of Microsoft Security Essentials. We had it in the image at first but realized it was prompting users with a wizard after deployment so we decided to push it out later in our process. Uninstalling the program removed the program files but left behind a registry key telling sysprep to reference a .dll. Poor clean up on Microsoft's part.
We ran a simple sysprep task sequence on the machine as opposed to the full process of sysprep and capture. After this process we noticed that the setuperr.log in the C:\Windows\System32\sysprep\Panther folder had an entry similar to this:
SYSPRP LaunchDll:Could not load DLL c:\Program Files\Microsoft Security Client\MSESysprep.dll
This indicated a fatal error and that sysprep was stopping. Sysprep was not completing, so OOBE would not run on the resulting deployed image, and the run-synchronous commands for auto logon and LTIBootstrap.vbs would never get triggered... it all started to make sense.
If you search the registry in this location:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\Sysprep\Cleanup
You will find a key that has a value referencing c:\Program Files\Microsoft Security Client\MSESysprep.dll as well as a few other security essential files.
Remove this key. You may need to alter permissions on the parent key to do this, since by default the SYSTEM account has rights to it but the local administrator does not.
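As a rough sketch from an elevated command prompt, first list what is under that key, then delete the offending value; the value name below is a placeholder, substitute whatever name the query actually shows referencing MSESysprep.dll:
reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\Sysprep\Cleanup" /s
reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup\Sysprep\Cleanup" /v <ValueNameFromQuery> /f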
You should now be able to successfully sysprep your image, capture, and deploy it.
Monday, October 14, 2013
Allow Oracle Users to Remove Their Own Locks
Somehow your Oracle users have an uncanny ability to leave locks on their tables, making them inaccessible for an unusually long time. This can happen due to poor commit practices or simply because a remote session timed out at the wrong instant. You would like to avoid having the issue escalated by giving the users the authority to remove the locks themselves. We can do this using a stored procedure that kills the offending sessions.
The desired procedure should accomplish the following:
- Users can kill only those sessions they own. In other words, they can't kill other users' sessions even if they happen to know other user IDs.
- Kills will occur only in the sessions the user holds other than the current one. When they are connected (or reconnected) in an attempt to perform the unlock, we don't want to kill the session they are currently using.
Logged in as a user that has ALTER SYSTEM privileges, create the following stored procedure.
create or replace procedure selfkillSessionProc IS
  strNeeded   VARCHAR2(50);
  cursor_name pls_integer default dbms_sql.open_cursor;
  ignore      pls_integer;
BEGIN
  FOR x IN (SELECT s.inst_id, s.sid, s.serial#, p.spid, s.username, s.program
              FROM gv$session s
              JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id
             WHERE s.type != 'BACKGROUND'
               AND s.username IN (SELECT SYS_CONTEXT ('USERENV', 'SESSION_USER') FROM DUAL)
               AND s.sid NOT IN (SELECT sid FROM v$session WHERE audsid = userenv('SESSIONID')))
  LOOP
    DBMS_OUTPUT.PUT_LINE(x.sid || ',' || x.serial#);
    strNeeded := '''' || x.sid || ',' || x.serial# || '''';
    DBMS_OUTPUT.PUT_LINE(strNeeded);
    dbms_sql.parse(cursor_name,
                   'alter system kill session ''' || x.sid || ',' || x.serial# || '''',
                   dbms_sql.native);
    ignore := dbms_sql.execute(cursor_name);
  END LOOP;
END;
The above procedure ultimately uses the ALTER SYSTEM command to end the current user's other sessions. To do this we gather the session IDs and serial numbers from the GV$SESSION and GV$PROCESS views.
Additionally, the ALTER SYSTEM command cannot be called directly from a stored procedure, so we use DBMS_SQL to parse and execute it dynamically.
Finally, give your users permission to use the stored procedure.
grant execute on selfkillSessionProc to public;
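Once granted, a user holding a stale lock can clear their own leftover sessions from SQL*Plus; SERVEROUTPUT simply makes the procedure's DBMS_OUTPUT lines visible:
SET SERVEROUTPUT ON
EXEC selfkillSessionProc;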
Tuesday, June 11, 2013
vCloud Director 5.1.2 Removing Org VDC Networks
I ran into a little issue the other day configuring vCloud Director. I wanted to remove an organization VDC completely. Of course, to do this you must disable the VDC and remove all of its resources (networks, templates, and vApps). Upon trying to remove the networks I kept getting the error:
Entity xxx.xxx.xxx.xxx cannot be deleted, because it is in use.
This error occurs even though there is absolutely nothing using the network; no templates, VMs, vApps... nothing. Our networks were of the direct connect type, although this may also occur with an Edge Gateway network. We didn't want to remove the upper-level networks from the entire cloud. The system seems to want to hold on to these networks once they have been added for the first time. It seems we may not be able to remove these networks from the system, but we were able to move them to a new organization VDC. We did this directly in the vCloud database.
/******
First let's see what is in the table
******/
SELECT TOP 1000 [id]
,[vdc_id]
,[lr_type]
,[name]
FROM [vcloud].[dbo].[vdc_logical_resource]
Take a look at what you have in the table. The records you care about have a lr_type of NETWORK.
Once you have the IDs from the table you can update the records accordingly.
In the change query below, the first parameter (vdc_id) is the organization VDC you want the network to move to. You can usually find the proper id by noting the vdc_id of the record that has the organization name in the 'name' field. The second parameter (id) is the network you want to change. In other words, you are going to set the network's organization VDC id field.
UPDATE vcloud.dbo.vdc_logical_resource
SET vdc_id = 0xabunchofUUIDnumbersfortheorganization
WHERE id = 0xabunchofUUIDnumbersforthenetwork;
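If you need to look up the destination organization's vdc_id first, a filtered query against the same table can surface it (the name filter below is just an example value):
SELECT [id]
,[vdc_id]
,[lr_type]
,[name]
FROM [vcloud].[dbo].[vdc_logical_resource]
WHERE [name] LIKE '%YourOrgVdcName%'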
Another unique issue we were seeing occurs when we try to disable sharing this network with other VDCs in the organization. If we try to uncheck the box in its properties we get the following error in vCloud Director:
Index: 0, Size: 0
Not sure what this one is about. It seems it could be an out-of-bounds exception from the program accessing a list.
Friday, October 26, 2012
Using a Script to Add a Network Printer with Custom Drivers
You can use a simple .vbs file to add a printer to a Windows 7 computer. This script will create a new TCP/IP local port for use with the printer. Before you begin, you will want to get hold of the drivers that will allow this printer to be used on your target system (32- or 64-bit).
Often the drivers are gathered in a folder within the download the manufacturer provides. You may need to sift through some of the files to find the correct .inf file for your printer. If you view the contents of the .inf file you may find a number of printers listed. It's important to note how your printer model is designated; you will need to specify the exact text of the printer listing in the /m switch of the printui.dll command.
One way to get the exact name of the printer you need to specify is to start the installation on a test system using the "Add Printer" wizard in "Devices and Printers". Once you get to the "Install the printer driver" step, click on "Have Disk...", browse to the .inf file, and hit OK. You will be presented with a list of printers. Using the exact text of one of the options presented in the selection box as your printer listing text will achieve the desired result. You can cancel out of the wizard once you have the text copied to your script.
Let's assume the following scenario:
- We have a networked Dell 5350 at network address 192.168.1.70
- The printer is in room 207 of your building
- You have pushed the drivers for the printer to a local folder (C:\Drivers\Dell 5350dn) on your target system.
- You want the printer name to show up as "Room 207 - Dell Laser"
- The local port name will match the IP address.
Save the following into a text file and rename the file extension to .vbs:
Set WSHNetwork = WScript.CreateObject("WScript.Network")
set shell = WScript.CreateObject( "WScript.Shell" )
CompName = shell.ExpandEnvironmentStrings("%COMPUTERNAME%")
Set objWMIService = GetObject("winmgmts:\\" & CompName & "\root\cimv2")
Set objNewPort = objWMIService.Get("Win32_TCPIPPrinterPort").SpawnInstance_
Set oShell = WScript.CreateObject("WScript.shell")
Set objPrinter = objWMIService.Get("Win32_Printer").SpawnInstance_
sub createPort (name, ip)
objNewPort.Name = name
objNewPort.Protocol = 1
objNewPort.HostAddress = ip
objNewPort.SNMPEnabled = False
objNewPort.Put_
end sub
'-- Call the create port function with the address and port name parameters
createPort "192.168.1.70", "192.168.1.70"
oShell.run "cmd /K rundll32 printui.dll,PrintUIEntry /if /f ""C:\Drivers\Dell 5350dn\DKACLC40.inf"" /n ""Room 207 - Dell Laser"" /m ""Dell 5350dn Laser Printer"" /r 192.168.1.70 /b ""Room 207 - Dell Laser"" /q"
Set oShell = Nothing
Check your "Devices and Printers" window to see if the new printer has appeared.
If you are finding that something is not working but you see that the port was created you can try just the following command on the target system.
rundll32 printui.dll,PrintUIEntry /if /f "C:\Drivers\Dell 5350dn\DKACLC40.inf" /n "Room 207 - Dell Laser" /m "Dell 5350dn Laser Printer" /r 192.168.1.70 /b "Room 207 - Dell Laser"
If things aren't working with just the command you may get an error in the form of 0x00000*. This can often indicate that the driver file specified can't be found or is invalid.
Monday, October 22, 2012
Password-less SSH Connections & What Can Go Wrong
So, you would like to jump from server to server over SSH without being prompted for a password each time. Or perhaps you have a script or application that needs to access information on a remote server via an SSH tunnel. Using SSH shared key authentication makes this possible.
Shared key authentication is easy enough to set up, but there are a couple of pitfalls that can have you pulling out your hair if things don't work after you have followed the instructions exactly.
Let's consider the following two server scenario:
Server 1: AIR
Server 2: WATER
Suppose there is a user "bill" on both servers. Bill usually works on AIR and frequently needs to perform tasks on WATER, so he would like to use shared key authentication.
AIR has a /etc/hosts file entry for WATER and vice versa.
First Bill logs onto AIR and issues the following command:
ssh-keygen -t rsa
Now might be a good time to mention that there are other options you could use the ssh-keygen command with to create different types of keys with a variety of key lengths. For this example we'll just use the default rsa key type.
When you hit enter after this command you will see:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/bill/.ssh/id_rsa):
Accept this default path and hit enter.
You'll be prompted for a passphrase, once to create it and again to verify; leave it blank and hit enter each time.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/bill/.ssh/id_rsa.
Your public key has been saved in /home/bill/.ssh/id_rsa.pub.
The key fingerprint is:
XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX bill@air
For this next step to work there should be at least a ~/.ssh/ directory on the WATER server under bill's account. An easy way to achieve this is to perform the same key generation sequence on WATER that we did on AIR. Once this is done we'll have .ssh directories on each of the servers. The next step is to add the public key of the server we are coming from to the authorized_keys file of the server we are going to.
When we are on AIR we can issue the following command:
cat ~/.ssh/id_rsa.pub | ssh bill@WATER "cat - >> ~/.ssh/authorized_keys"
You might be prompted to store the remote server's RSA info to the local system if this is the first time you are connecting. Bill will also be prompted for his WATER password this time.
This command will load the local (AIR) public key into the remote (WATER) server's authorized_keys file. From this point forward bill's SSH authentication from AIR to WATER can now be handled through a key exchange.
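On systems that ship the ssh-copy-id helper, the same step can be done with one command; it appends your default public key to the remote authorized_keys file and creates the remote .ssh directory if needed:
ssh-copy-id bill@WATER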
Now if bill is on AIR and issues a simple SSH command:
ssh WATER
He will be granted a direct password-less connection. This is great if bill wants to establish a script using scp to copy items from AIR to WATER, the script won't prompt for a password.
Congratulations! Oh.. wait.. it didn't work ??
Now.. what to check if things don't work.
There are a few not so obvious things that can leave you pulling your hair out.
Improper permissions are usually the most obscure type of issue.
- Permissions on your home directory. The user you are remoting in as should be able to read the remote user's home directory. Think about our example above: Bill may not have the same user ID number on both servers. If the numbers are different, then Bill looks like a different person to each server's OS. You can achieve the needed access via a group or public permission, depending on your security needs.
- Permissions on the authorized_keys file should be set to -rw-r--r--, in other words 644. Also, you (bill in this case) should own these files on each server. (The exact commands are sketched just after this list.)
- If you have copied keys manually by copy and pasting or FTP, you may have broken the key, check for proper file encoding and stray characters or carriage returns.
- If you are setting this up on a root account you should check the /etc/ssh/sshd_config file to ensure the PermitRootLogin without-password option is enabled.
- If you have your home directory NFS v4 mounted on the remote server, you'll need something like rpc.idmapd to ensure the user and group ownership information is mapped correctly on the remote system. The default is usually to show the nobody user and group as the owner of the files. A quick fix is to specify vers=3 in the fstab mount options.
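Here is a minimal sketch of the permission and ownership fixes described above, run as bill on WATER (paths assume the example account; adjust the group and modes to your security policy):
chmod 755 /home/bill
chmod 700 /home/bill/.ssh
chmod 644 /home/bill/.ssh/authorized_keys
chown -R bill:bill /home/bill/.ssh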
Hopefully these tips get you up and running, if you find another issue please tell us about it.
Monday, October 15, 2012
Sendmail 550 Access denied with 127.0.0.1 Relay
Ran into this little sendmail issue today on a CentOS box.
My Linux server was configured to relay mail to a main corporate exchange server. The M4 configuration already had a proper SMART_HOST configured.
I was doing a simple test with sendmail and was getting an access denied error like this:
[root@server]# sendmail -v root
ppp
ppp
.
root... Connecting to [127.0.0.1] via relay...
220 server.com ESMTP Sendmail 8.13.8/8.13.8; Mon, 15 Oct 2012 13:41:19 -0400
>>> EHLO server.com
250-server.com Hello server.com [127.0.0.1], pleased to meet you
250 ENHANCEDSTATUSCODES
>>> MAIL From:<user@server.com>
550 5.0.0 Access denied
user... Using cached ESMTP connection to [127.0.0.1] via relay...
>>> RSET
250 2.0.0 Reset state
>>> MAIL From:<>
550 5.0.0 Access denied
postmaster... Using cached ESMTP connection to [127.0.0.1] via relay...
>>> RSET
250 2.0.0 Reset state
>>> MAIL From:<>
550 5.0.0 Access denied
Closing connection to [127.0.0.1]
>>> QUIT
221 2.0.0 server.com closing connection
This shows us that the first hop in the relay process is to itself. This was where it was failing.
The fix was to add the following line to the /etc/hosts.allow file:
sendmail: ALL :allow
This cleared up the issue and got mail flowing once again. This works because the hosts.allow entries also apply to traffic from local system daemons such as sendmail.
Viewing communication using the sendmail -v command is one way to see what's happening; however, in my case, once things were working again with the local relay I needed to look at the /var/log/maillog file to see that messages were making the next hop out to the corporate mail server. I could then see relay=corporate.mail.server.com. in the logs with a status of "Sent".
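A quick way to pull just those relay entries out of the log (the relay host name here is the same placeholder used above):
grep 'stat=Sent' /var/log/maillog | grep 'relay=corporate.mail.server.com' | tail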
Friday, October 12, 2012
Slow SSH Connections
If you are SSH'ing to a server and having to wait for the user name and/or password prompt, the issue could be more than just a slow connection. There are a couple of common things to check if you are waiting anywhere from 10 seconds to over a minute to get your session established.
1. DNS reverse mapping not resolving:
The SSH server may be trying to perform a reverse lookup on the client trying to connect. If DNS doesn't respond quickly, either with the host name or a 'not found' reply, the attempt will continue until it times out. Modify your /etc/ssh/sshd_config to:
UseDNS no
2. SSH may be trying too many authentication types:
SSH may be configured to try PAM, GSSAPI, or some flavor of shared key authentication. You can change this setting in /etc/ssh/sshd_config:
GSSAPIAuthentication no
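Whichever of these settings you change, reload sshd afterwards so the change takes effect. On init-based systems:
service sshd restart
On systemd-based systems:
systemctl restart sshd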
If you are using PuTTY you may also want to check the settings there. If you are trying to connect using GSSAPI in PuTTY but the server isn't set to use it, you will create a delay while it attempts this. One telltale sign that GSSAPI is enabled on the client side and is failing is getting an "Access denied" message at the prompt (illustrated below) even though authentication eventually succeeds.
login as: user
Access denied
user@someserver.com's password:
Last login: Tue Oct 1 01:23:40 2012 from localhost.com
Uncheck the "Attempt GSSAPI authentication" box and see if this speeds things up.
Fix Unstable Fedora Running on Crucial M4 SSD
Worked through an interesting experience with an SSD recently. Apparently the Crucial M4 2.5" SSD has a stability issue that occurs after 5000 hours of actual use.
Crucial admitted this issue here:
http://forum.crucial.com/t5/Solid-State-Drives-SSD/BSOD-Crucial-M4/td-p/79098
I saw this manifest itself in the form of an unstable Fedora system running on a 128GB version of the M4. When rebooted, the system would appear stable until an hour passed, at which point it steadily degraded. Files were coming up not found, and command execution was giving an "input/output error", which pointed to the system being unable to read files from the drive.
Using smartctl we noticed that the "Power_On_Hours" RAW_VALUE was around 8000. The command we used was:
smartctl -a /dev/sda
Just a side note, but some other notable info in this output that can indicate a possible drive failure is the "Reallocated_Sector_Ct" and the "Seek_Error_Rate" attributes.
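If you only want those attributes rather than the full report, you can filter the output (assuming the drive is /dev/sda as in the command above):
smartctl -a /dev/sda | grep -E 'Power_On_Hours|Reallocated_Sector_Ct|Seek_Error_Rate'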
The fix in this case was to upgrade the firmware. We did this and the system returned to a stable state. Crucial provides an ISO image you can boot to perform the upgrade, which is a much better option if you are using the drive as your boot drive.
Tuesday, October 9, 2012
Getting the pesky browser plug-in to work for VMware Lab Manager
You've probably googled your eyes out trying to find an update, patch, or workaround to get the Lab Manager plugin to work with the browser or OS of your choice. In some cases it's a lost cause; VMware has dropped the product and moved toward vCloud Director. However, on almost all platforms there is a little hope you can get it (or keep it) working, but you may need to make some compromises.
Below are some of the procedures we have found to alleviate some of the plug-in issues. We first review the issues on Windows then Linux.
A note about Mac-based issues
At this time, the plugin is not natively supported on any Mac platform. You should consider running the plugin in IE on a Windows VM installed on your Mac using VMware Fusion.
A note about the Plugin Files
For a few of these fixes you should grab the plugin source files; these are sitting on the Lab Manager server. You will need them if you run through a manual installation.
WINDOWS BASED SYSTEMS
In some cases the initial console browser plugin installation can be a bit troublesome. The problem can usually be overcome by correcting browser settings and performing a clean administrative installation. Outlined below is the easier process first (resetting the browser and installing as an admin), followed by a manual uninstall and clean installation.
Default IE Settings and Install as Admin
Even though the plugin seems to be installed, you may experience a browser crash or simply get a blank box where you expect to see the console window. Here are some fixes to try.
Run IE administratively:
Right click on the IE shortcut in your start menu and select “run as administrator”
Go to: Tools → Internet Options → Advanced → Click on the “Reset” button.
Restart the browser as an administrator and try to access the console once again.
Add your lab manager website to the trusted sites list:
Go to Tools → Internet Options → Security →Trusted Sites Add the site
Be sure to enable the QuickMksAxCtl Class:
With IE closed, go to Control Panel → Internet Options → Programs and hit the "Manage Add-ons" button. Use the drop-down menu to show downloaded controls and double-click on the QuickMksAxCtl Class add-on. Select the button to allow it to run on all websites. Close this window, then enable the add-on.
Manually uninstall the VMware console browser plugin from Internet Explorer
Run the following command as an administrator:
regsvr32 /s /u "C:\Program Files\Internet Explorer\PLUGINS\quickMksAx.dll"
Delete the following files:
C:\Program Files\Internet Explorer\PLUGINS\msvcr71.dll
C:\Program Files\Internet Explorer\PLUGINS\quickmksax.inf
C:\Program Files\Internet Explorer\PLUGINS\ssleay32.dll
C:\Program Files\Internet Explorer\PLUGINS\vmware-remotemks.exe
Manual VMware console browser plugin installation for Internet Explorer
Before you begin the manual installation, remove the plugin using the manual uninstallation procedure.
Depending on your configuration you might run into system permission issues if you expand the cabinet file and try to directly copy to this folder using the GUI. If you experience this issue you may try opening the command prompt as an administrator and using the 'expand' command.
expand -F:* C:\ClientSoftware\VMware-mks.cab "C:\Program Files\Internet Explorer\PLUGINS"
To register the plugin, run the following command as an administrator:
regsvr32 /s "C:\Program Files\Internet Explorer\PLUGINS\quickMksAx.dll"
These VMware KB articles outline some of the above procedures:
- Running as admin
- Adding security exceptions
- Failed installation, manual intervention
Manual Installation for Firefox
Note: At this time the console plugin is not compatible with Firefox 4.
You may find 3.6 still available here: ftp://ftp.mozilla.org/pub/mozilla.org/firefox/releases/3.6.28
Use the plugin zip file and unzip the file's contents into:
%ProgramFiles%\Mozilla Firefox\plugins
Also, copy ssleay32.dll and libeay32.dll to:
%ProgramFiles%\Mozilla Firefox
LINUX BASED SYSTEMS
Manual Installation for Firefox 3
Note: At this time the console plugin is not compatible with Firefox 4.
You may find 3.6 still available here: http://www.mozilla.com/en-US/firefox/all-older.html
Download the zip file using the links above and unzip the file's contents into:
~/.mozilla/plugins
Using Mozilla Firefox on Linux to access the Lab Manager Web console can cause problems with the console plugin. There are a number of possible issues and solutions:
In Firefox on Linux, if error messages appear when you try to use a virtual machine's console, you might not have all required libraries installed.
For RHEL 64-bit, you need to install compat-libstdc++-33-3.2.3 on the system (ideally using yum, which also installs libstdc++.so.5), and for Ubuntu, go to http://packages.debian.org/stable/base/libstdc++5 and install the missing library.
If Firefox reports that it could not install the plugin (Cancelled -227), create a directory named "plugins" in $HOME/.mozilla on the client computer. Log in to Lab Manager and install the plugin. Restart Firefox.
If Firefox reports "LoadPlugin: failed to initialize shared library /root/.mozilla/plugins/libmks.so", create a soft link to libexpat.so.
Lab Manager Web console page shows an empty box in Mozilla Firefox 3.6 on Linux
Some versions in the Firefox 3.6 series strip executable permissions on files that are extracted from the XPI plugin binary (see http://blog.mozilla.com/addons/2010/01/22/broken-executables-in-extensions-in-firefox-3-6). The console plugin does not load correctly and the console page appears blank. To resolve the issue, browse to the console plugin installation folder at "/<Firefox_profile_folder>/extensions/VMwareMKSNPRTPlugin@vmware.com/plugins/" and run the command "chmod 755 *" to manually enable permissions on the files of that folder.
You may find that the solution is to use Firefox 3.5 or below, as 3.6 or higher doesn't work with the VMware remote console plugin. Since this version is already not getting security updates, it's best to install it separately from the main Firefox and use a new profile. To avoid messing up any Ubuntu version of Firefox, just untar the Firefox 3.5 tar.gz under something like /opt/firefox-3.5.
Here's a shell script that invokes this Firefox with the right profile, even if you have a more recent Firefox running (via the -no-remote flag):
#!/bin/sh
# Run Firefox 3.5, for VMware 2.0 only
prog=/opt/firefox-3.5/firefox/firefox
exec $prog -no-remote -P vmware-FF3.5
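Note that the vmware-FF3.5 profile referenced by -P has to exist before this will work; one way to create it ahead of time (using Firefox's standard command-line switch) is:
/opt/firefox-3.5/firefox/firefox -no-remote -CreateProfile vmware-FF3.5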
After you are done with the console, it's best to close the Firefox 3.5 instance, otherwise links clicked in other applications may open in the 3.5 instance.