Thursday, May 27, 2010

Platform: selinux and squid proxy (CentOS 5.x)

Today I banged my head against an issue where squid wasn't allowing a user out to a web site running on an oddball port (port 85 in this example). Step #1: I added port 85 to the Safe_ports acl in /etc/squid/squid.conf (which technically enables it for every site, but I'll tighten that up later). Then I opened that port on the firewall running behind the squid proxy (this time only to the destination IP address of the oddball server running on port 85).
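
For reference, the relevant line in squid.conf looks something like this (port 85 is just my oddball case; it goes alongside the existing Safe_ports entries):

acl Safe_ports port 85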

In testing, the end user was consistently getting a "Connection Failed: (13) Permission denied" error back from squid. In this case, selinux was kicking in and saying "hey, port 85 is stupid (duh) and squid really should not be doing this." To add an exception, use the semanage command to add the port as follows:

semanage port -a -t http_port_t -p tcp 85
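
If you want to double-check your work, list what selinux allows for that type; after the command above you should see 85 alongside the standard web ports:

semanage port -l | grep http_port_t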

And after that [4-hour ordeal of research and testing], it works.

Monday, November 02, 2009

Platform: Windows (tested specifically on Windows 2003 R2)

Ye olde command line FTP still comes in handy sometimes. For example, I'm running VMware Server 2.x on a 2k3R2 box (64-bit in this case) with a few Linux guests. Today I was going to grab the latest CentOS release, 5.4, from one of the mirrors. IE won't download a file that large, so I found a mirror I could reach via ftp.

I fired up cmd.exe, changed to the drive where I keep my source CD and DVD images, and started the ftp download. About a third of the way through the 4+ GB iso, my machine started telling me it was running out of space on the C: drive. Huh?

Well, it turns out the command line ftp program Microsoft ships with the OS saves anything it downloads to the local temporary directory first. It doesn't just stream the download straight down to the current drive like a Linux-based ftp client would. And after tinkering I discovered it respects the environment variable TMP (and not the one named TEMP), apparently echoing back some roots in BSD or something.

So, the solution was to open a command window, type SET TMP=V:\tmp (where the V: drive and tmp folder actually exist), and try my ftp again. Alternatively, some commercial ftp app might not behave this way, but why spend money when you get the basics you need for free? You just need to watch what you're doing.
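
For anyone following along, the whole dance looks roughly like this (the mirror hostname, paths, and file name are placeholders from my setup, so adjust to taste). Don't forget binary mode, or ftp.exe will mangle the iso:

SET TMP=V:\tmp
V:
cd \isos
ftp mirror.example.org
(log in, then at the ftp> prompt)
binary
get CentOS-5.4-x86_64-bin-DVD.iso
bye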

Monday, September 28, 2009

Platform: Windows Server 2008 (and R2)

DHCP and Classless Routes: In my environment I use DHCP option 249 on my Windows Server 2003 DHCP servers to provide routes to my XP clients. We're heavily subnetted and don't have all routes on classful boundaries (subnet masks of 255.255.0.0 or 255.255.255.0 don't exist here; we have stuff like 255.255.254.0).

This past weekend I ported my DHCP scope over from a retiring 2003 server to a new 2008 server, and everything seemed to work fine using the following KB: http://support.microsoft.com/kb/962355. However, my scope option 249 was missing, my clients were missing their routes, and I had an option showing up as unknown on the new 2008 server.

I did a bunch of research and frankly didn't find much until I ran across a Technet discussion about option 121 on 2008, which does exactly what option 249 did on 2003 and earlier. The catch, buried in the documentation, is a little backwards-compatibility hack: if a client asks for 249 and the DHCP server doesn't have it defined, the server is supposed to lie and respond with the contents of option 121 so the client still gets its routes. I deleted the unknown option and its details, recreated the routes under option 121, and tested with an XP client using a release and renew. It worked. *whew*
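
For the curious, option 121 (RFC 3442) packs each route as the prefix length, then only the significant octets of the destination, then the router address. As a made-up example, a route to 192.168.12.0/23 via 10.1.1.1 encodes as:

0x17 0xC0 0xA8 0x0C 0x0A 0x01 0x01 0x01
(/23) (192.168.12) (10.1.1.1)

That also explains why the ported-over option just shows up as an unknown binary blob when the server doesn't recognize the option number.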

The flipside to this adventure is that I have a couple of Vista and Windows 7 clients we're playing with, and they are not getting routes in some subnets. Those subnets are served by 2003 DHCP servers, which don't have option 121. Option 249 is there, but Vista and 7 don't ask for or care about 249, so they are blind to the additional routes. I'll be working to upgrade those subnets to a 2008-based server so everything is happy.

I can't explain the change, but make sure you account for it in your 2003->2008 transitions so you can avoid the headaches I had as clients renewed their DHCP leases and lost their routes.

Wednesday, February 18, 2009

Platform: Openfiler V2.3

I use Openfiler as an iSCSI target for some dev servers; it gives me a way to utilize some hard drives in an old Dell PowerEdge 2550 server I have in my rack. It's not a bad box to use as a cheap, low-performance SAN, since it has a built-in gigabit Ethernet NIC that's supported by Openfiler, along with a supported 10/100 NIC. I use the 10/100 for the admin interface and the gigabit NIC to talk to my servers.

Anyhow, I'm always reconfiguring this thing, removing volumes and targets, etc. Openfiler is terribly buggy on the deletion side of things, often leaving stale entries behind in config files. Today I was "clearing" the box down and inadvertently ended up with no volumes but one iSCSI target still defined, pointing at a nonexistent volume. That last iSCSI target I was unable to remove through the web GUI.

So, the trick is to get a console on the box, and dig into the following file:

/opt/openfiler/etc/volumes.xml

You'll open it up and find something like this:

<?xml version="1.0" ?>
<volumes>
<volume id="iscsi-1" name="First Target" mountpoint="/mnt/volgroup1/iscsi-i/" vg="volgroup1" fstype="xfs" />
</volumes>

That volume id on the 3rd line is the thing that has to go. Fire up a text editor, remove that line (leave the rest intact), and save the file. Refresh your GUI and start adding volumes again; the iSCSI targets won't be stuck anymore.
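
Before you edit, take a copy of the file so you can back out if the GUI gets even angrier (my habit, not an official Openfiler procedure):

cp /opt/openfiler/etc/volumes.xml /opt/openfiler/etc/volumes.xml.bak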

Tuesday, January 13, 2009

Platform: Windows Server 2003 (including R2)
Application: Windows Sharepoint Services (in this case version 2.0)

Best practice is to try upgrades in a lab before taking the plunge with new apps. We're in the process of moving our old Sharepoint Team Services app up to version 3.0, and I wanted to tinker with it in a lab environment before going for it.

Fortunately, most of our environment is currently virtual machines (VMware). So, I found a downtime window to copy those guests over to my desktop, set up an isolated network (vmnet2), and bound the NICs to that network. I copied over my IIS server with Sharepoint on it, created a separate SQL server and restored recent backups of the Sharepoint databases, and brought over one of our domain controllers/DNS servers so the other 2 machines had a DC to talk to.

I kept getting the dreaded "#50070: Unable to connect to the database." error in the system log of the Sharepoint server for the Sharepoint Timer service. Lots of people get this error, and it's frequently related to connectivity issues. I dug through a lot of the options presented on EventID.Net and none seemed to match my situation.

Eventually I started checking the user permissions on the SQL server and realized I couldn't browse the network from it. WTH! The Sharepoint server could see the network (remember, this is my sandbox that really only has 3 machines). Then a big lightbulb came on when I realized I had never joined the SQL server to the sandbox domain! I had created it from scratch, installed SQL Server, and restored my database backup, but I never joined it to the domain, so none of the Sharepoint services worked, nor was I able to connect directly (I got another error there).

So, if you ever find yourself in this situation, make sure the darn SQL Server is in the domain! :-)
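
A quick way to check from a command prompt on the SQL box (works on XP and 2003; the Domain line reads WORKGROUP if you forgot like I did):

systeminfo | findstr /B /C:"Domain"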

Saturday, March 22, 2008

Platform: Windows XP

This past week I was working on a network integration for a business my current company bought. We inherited their outdated installation of MAS 200 from Best Software (ahem, the audacity). Anyhow, we replaced their "suitable only for m0n0wall" 6-year-old beige-bomb computers with state o' the art HP desktops with widescreen displays. Everything worked fine; reinstalling the software under the new profiles went great.

Except there were "sporadic" problems. Several of the dialogs in MAS have scrolling windows of line items, and we were getting complaints that users sometimes could not highlight the line items. Click, and nothing happens.

Our local deputy-IT tech figured it out. It turns out this app cannot handle the new widescreen 16:10 displays. If the dialog box sits beyond the area corresponding to the 4:3 section (approximately the left two-thirds of the display) and the user tries to click on the line items in the dialog box, the clicks are ignored. Move the dialog box into the left two-thirds of the screen and it works. I've NEVER seen anything like this before. We even determined that if the window straddled the imaginary Mason-Dixon line between the 4:3 region and the rest of the 16:10 screen, you could click the line items on the left-hand side of the dialog but not the right. Note that scrollbars and buttons worked fine; just the line items were affected.

So, the "solution" has a few possibilities. In some cases, we ran the widescreen displays with a 4:3 resolution (i.e. 1024X768 or 1154X864) which of course results in fat wide video that's not crisp. Religiously speaking, I'm opposed to this but many of the users liked the larger display and didn't mind the "fat" view. The other solution is to move the dialog boxes over when a user discovers the window isn't clickable. I'm sure there's a patch or update that negates this issue, but this app is going to be replaced by others in the near future so we're not interested in going through all the other maintenance headaches to change it.

So, if your windows don't work, did you recently switch to a widescreen display? Strange.

Thursday, January 18, 2007

Platform: Windows XP, 2003, Vista (PowerShell)

If you're not using Windows PowerShell, you should be. Here's a way I figured out how to flatten a path into a single directory. In my case I had lots of pictures in different folders that had unique names, so it was a matter of doing the following:
gci . *.jpg -Recurse | cpi -Destination 'c:\flat'

gci, an alias for Get-ChildItem, gets all the files in this directory; the -Recurse switch makes it descend into all subdirectories. I ran this in the current directory, hence the ".", and I just wanted all my JPEG pictures.

Then pipe it out to the cpi cmdlet (an alias for Copy-Item) with -Destination and your destination path. It worked GREAT. In the past on Windows I've done this either with Cygwin or with other utilities like XXCOPY ($$$). Free is good, and I'm just getting my feet wet in PowerShell now.
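
One caveat: cpi will happily overwrite files that share a name, which is why my unique filenames mattered. If yours aren't unique, a rough sketch like this prefixes each copy with its parent folder's name to keep collisions apart:

gci . *.jpg -Recurse | ForEach-Object { cpi $_.FullName -Destination (Join-Path 'c:\flat' "$($_.Directory.Name)_$($_.Name)") }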

Other sites have discussed this as well.