samedi 27 juin 2015

What does this do? NVL (m-date), to-char(sysdate)

Hello, I'm new to this site and I have a question to ask. Thank you.

What does this do? NVL (m-date), to-char(sysdate)
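A sketch of what this (probably Oracle) expression does, assuming the intended form is NVL(m_date, TO_CHAR(SYSDATE)) and that m_date and some_table are placeholder names:

```sql
-- NVL(expr1, expr2) returns expr1 when it is not NULL, otherwise expr2.
-- So this returns m_date when present, and today's date (as a string) as a fallback.
SELECT NVL(m_date, TO_CHAR(SYSDATE)) AS effective_date
FROM   some_table;
```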

MySQL Case sensitive insert

I have created a database with case sensitive character set

Create database Grupo88 character set utf8 collate utf8_bin;

CREATE TABLE Grupo88.Usuarios(
    nombreUsuario varchar(30) primary key,
    clave varchar(120) not null);

INSERT INTO usuarios(nombreUsuario,clave)
       VALUES('guy','pass');
INSERT INTO usuarios(nombreUsuario, clave)
       VALUES('Guy', 'password');

The first insert goes well, but the second one says that the value "Guy" already exists. Isn't setting my database to be case sensitive enough? What can I do to allow case-sensitive inserts?

Thanks
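One thing worth checking, as a hedged sketch: the database-level collation is only a default for tables created later, so the column may be carrying a different (case-insensitive) collation. Something like this would verify and, if needed, force a binary collation on the key column:

```sql
-- See which collation the nombreUsuario column actually uses.
SHOW FULL COLUMNS FROM Grupo88.Usuarios;

-- Force a case-sensitive (binary) collation on the key column.
ALTER TABLE Grupo88.Usuarios
  MODIFY nombreUsuario VARCHAR(30)
  CHARACTER SET utf8 COLLATE utf8_bin;
```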

MySQL LIMIT returns unexpected results

<?php
$quuuu = mysql_query("SELECT * FROM products limit 9,15 ") or die("Query Error");
while($ffff=mysql_fetch_array($quuuu)){
    echo "<li><a href='view.php?id=" . $ffff['id'] . "'>" . $ffff['title'] . "</a></li>";
}
echo mysql_num_rows($quuuu);
?>

It should return 7 rows, but the result is 15.
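The likely source of the confusion is that MySQL's two-argument LIMIT is `LIMIT offset, row_count`, not a start-row/end-row pair:

```sql
-- LIMIT offset, row_count: skip `offset` rows, then return up to `row_count` rows.
SELECT * FROM products LIMIT 9, 15;  -- skips 9 rows, returns up to 15 (not rows 9-15)

-- To fetch rows 9 through 15 (7 rows), skip 8 and take 7:
SELECT * FROM products LIMIT 8, 7;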

SQL JOIN to get two records for each row

I have two tables, say

TABLE: Transactions

COLUMNS: sender_id, receiver_id, value

and

TABLE: Users

COLUMNS: user_id, username

Is it possible to make a SQL statement to JOIN two rows from the users table for each record in the transactions table? So for each result row, I should have two usernames from the users table, but with two different aliases. How to achieve this?

Thanks
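Joining the Users table twice under two different aliases is the usual pattern here; a minimal sketch using the column names from the question:

```sql
SELECT t.value,
       s.username AS sender_name,
       r.username AS receiver_name
FROM   Transactions t
JOIN   Users s ON s.user_id = t.sender_id    -- first alias: the sender
JOIN   Users r ON r.user_id = t.receiver_id; -- second alias: the receiver
```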

How do I add records from an Access database into an existing MSSQL database?

I have an existing Access database that has outgrown its usefulness. I have written a Visual Basic program that uses a SQL database, and I have been re-entering the data from the Access file into the SQL file individually. I still have 300+ records to move and would like to find a way to do all this using either a data snippet in Visual Basic or SQL Express. I'm smart enough to set up and use Access, and smart enough to create a program and database in Visual Basic and SQL, but for some reason I'm not smart enough to move the records between the 2 databases. When I originally created the .mdf file I attempted to move the records at that time, but everything I tried didn't work, so I figured I'd get to it later; now I have 300+ records in the .mdf file and need to get the others moved over. Anybody have any ideas?
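One server-side option is to read the Access file directly from T-SQL. This is a sketch only: it assumes the Microsoft ACE OLE DB provider is installed and 'Ad Hoc Distributed Queries' is enabled, and the path, table, and column names are all placeholders:

```sql
-- Copy rows straight from the .accdb/.mdb file into the SQL Server table.
INSERT INTO dbo.MyTable (Col1, Col2)
SELECT Col1, Col2
FROM   OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                  'C:\data\MyDatabase.accdb';'Admin';'',
                  AccessTableName);
```

The SQL Server Import and Export Wizard (available with SQL Express tooling) does the same job interactively if enabling ad hoc queries is not an option.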

Selecting max values of a column using the max value of another column

Hi folks, I'm writing a query in MySQL and it has challenged me. Here is the situation:

Let's assume I have a table named 'status' which keeps data for a TV show, like this:

+---------+---------+---------+---------+
|   id    |  season | episode | channel |
+---------+---------+---------+---------+
|   1     |    2    |    10   |    a    |
|   1     |    3    |    2    |    b    |
|   1     |    2    |    9    |    c    |
|   1     |    3    |    1    |    d    |
|   1     |    3    |    2    |    e    |
+---------+---------+---------+---------+

I want to retrieve the rows which contain the last released episode of the last season. According to the table above, I expect a result like this:

+---------+---------+---------+
|  season | episode | channel |
+---------+---------+---------+
|    3    |    2    |    b    |
|    3    |    2    |    e    |
+---------+---------+---------+ 

The max value of the season column is 3. In this case, I have retrieved the rows that have the max value of the episode column where season is equal to 3. I have written a query and it gives the expected result, but I don't think that it is an appropriate query. How could I improve the query below? If needed, I can add extra information or give further examples.

SELECT season, 
       episode, 
       channel 
FROM   `status` 
WHERE  `tvseriesid` = 1 
       AND `season` = (SELECT Max(season) AS Son 
                       FROM   `status` 
                       WHERE  `tvseriesid` = 1) 
       AND `episode` = (SELECT Max(episode) 
                        FROM   `status` 
                        WHERE  `tvseriesid` = 1 
                               AND `season` = (SELECT Max(season) AS Son 
                                               FROM   `status` 
                                               WHERE  `tvseriesid` = 1)) 
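One way the three nested subqueries can be collapsed into one, assuming MySQL's row-constructor comparison against a single-row subquery:

```sql
SELECT season, episode, channel
FROM   `status`
WHERE  `tvseriesid` = 1
  AND  (season, episode) = (SELECT season, episode
                            FROM   `status`
                            WHERE  `tvseriesid` = 1
                            ORDER  BY season DESC, episode DESC
                            LIMIT  1);
```

The inner query finds the single latest (season, episode) pair once, and the outer query returns every row matching it.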

How do I create a user for internet and administration on MS SQL?

I am new to creating users on MS SQL. So here is what I am trying to do.

I am a master user on the Database. I can create users and databases and whatever.

I want to create two more users:

1) Administrator2: This user will execute all tasks of creating, dropping, and modifying data, tables and procedures on the database. It would be the same as db_owner; however, this user must NOT be able to drop the database. (My last admin dropped my database by mistake; I know, not a very good admin.)

2) WebUser: This user will be used by my application so it must be able to read, write, execute procedures, etc.

Is there a pre-made schema or something for me to achieve this? If not how can I achieve this?

I already know that I have to create logins for server access and users for the database, but a good set of simple steps would be appreciated.
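There is no single pre-made role for "db_owner minus DROP DATABASE", but the fixed database roles can be combined to get close. A sketch (SQL Server 2012+ syntax; names and passwords are placeholders):

```sql
-- Administrator2: DDL plus full read/write, but not a member of db_owner,
-- so the DROP DATABASE permission is never granted.
CREATE LOGIN Administrator2 WITH PASSWORD = 'StrongPasswordHere!';
CREATE USER Administrator2 FOR LOGIN Administrator2;
ALTER ROLE db_ddladmin   ADD MEMBER Administrator2;
ALTER ROLE db_datareader ADD MEMBER Administrator2;
ALTER ROLE db_datawriter ADD MEMBER Administrator2;

-- WebUser: read, write, and execute procedures only.
CREATE LOGIN WebUser WITH PASSWORD = 'AnotherStrongPassword!';
CREATE USER WebUser FOR LOGIN WebUser;
ALTER ROLE db_datareader ADD MEMBER WebUser;
ALTER ROLE db_datawriter ADD MEMBER WebUser;
GRANT EXECUTE TO WebUser;
```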

MySQL keeps complaining about foreign key. Error 150

I'm trying to create a table with a foreign key to itself. I want to model a category that can be a child of another category. Here is my SQL:

CREATE TABLE IF NOT EXISTS `recipes`.`category` (
  `id` INT NOT NULL COMMENT '',
  `name` VARCHAR(60) NOT NULL COMMENT '',
  `description` VARCHAR(255) NULL COMMENT 'Description of the recipe category.',
  `parent_id` INT NULL COMMENT '',
  PRIMARY KEY (`id`)  COMMENT '',
  CONSTRAINT `parent_id`
    FOREIGN KEY (`id`)
    REFERENCES `recipes`.`category` (`id`)
    ON DELETE NO ACTION
    ON UPDATE NO ACTION);

However, MySQL keeps giving me an error:

Can't create table 'recipes.category' (errno: 150)

I'm trying to find out what I'm doing wrong, can anyone give me a hint? Docs say that:

Cannot create table. If the error message refers to error 150, table creation failed because a foreign key constraint was not correctly formed.

However, this does not help me much.
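One likely cause, reading the DDL above: the constraint is declared on `id` rather than on the child column `parent_id`, so the foreign key is "not correctly formed". A corrected sketch (the constraint name is arbitrary; InnoDB is assumed since MyISAM ignores foreign keys):

```sql
CREATE TABLE IF NOT EXISTS `recipes`.`category` (
  `id` INT NOT NULL,
  `name` VARCHAR(60) NOT NULL,
  `description` VARCHAR(255) NULL,
  `parent_id` INT NULL,
  PRIMARY KEY (`id`),
  CONSTRAINT `fk_category_parent`
    FOREIGN KEY (`parent_id`)            -- the child column, not `id`
    REFERENCES `recipes`.`category` (`id`)
    ON DELETE NO ACTION
    ON UPDATE NO ACTION) ENGINE=InnoDB;
```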

Combining 2 queries - getting column names in one and using results in another query

Building my first MS Access SQL queries. That should not be this hard!
I have 2 tables:

a Data table and an AccessRights table.

A user belonging to GroupA logged in. I want to show him only those Data table rows and columns which GroupA is assigned to, like this:

╔════════╦════════╦════════╗
║ Group  ║  Data3 ║ Data4  ║
╠════════╬════════╬════════╣
║ GroupA ║   9    ║   4    ║ 
╠════════╬════════╬════════╣
║ GroupA ║   1    ║   5    ║
╚════════╩════════╩════════╝

I tried this silly option:

SELECT (select Data from AccessRights where GroupA = "y")
FROM Data
WHERE Data.Group = "GroupA";

How to use data from a stored procedure in a second procedure?

Over the last few months I have learned so much thanks to all the topics on the forum; thanks to everybody taking the effort to answer them. They helped me create my procedures, but now I'm stuck, and related topics haven't given me a solution so far.

Also looked at this page: http://ift.tt/1hyhgrb

Let me explain the situation:

I have created two procedures and they work perfectly. They work with temporary tables.

Below I have written it out as specifically as possible (I hope). If further information is needed, I can give it, of course.

The two procedures:

1 - Calculations of costs made by own drivers, trucks, fuel consumption > named CostOM

2 - Calculations of revenue and costs on the trip (revenue of orders, haulage cost, toll cost etc.) > named Trip_margin

What I need to add to the second procedure (Trip_margin) is the calculated amount from the first procedure (CostOM) that is in a column named 'CostOwnMat' (which stands for cost of own material).

Both procedures use the same primary key structure and that is the tripnumber.

The only difference is that procedure 1 can have the same tripnumber multiple times, because a trip can be done by 2 or more drivers.

So I need the total sum of the 'CostOwnMat' and return this in the second procedure in a new column named 'total_cost_OM'.

When I have that, I have all the revenue, costs etc. on the trip needed to calculate the margin made on the trip.

I think the notes below point in the direction of the solution, but how do I apply this?

• Table variables (DECLARE @t TABLE) are visible only to the connection that creates it, and are deleted when the batch or stored procedure ends.

• Local temporary tables (CREATE TABLE #t) are visible only to the connection that creates it, and are deleted when the connection is closed.

• Global temporary tables (CREATE TABLE ##t) are visible to everyone, and are deleted when all connections that have referenced them have closed.

• Tempdb permanent tables (USE tempdb CREATE TABLE t) are visible to everyone, and are deleted when the server is restarted.
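A common pattern that fits those notes is INSERT ... EXEC: capture the first procedure's result set into a local temp table inside the second procedure, then aggregate it. This is only a sketch; the column list and the #TripMargin table name are assumptions standing in for the real procedures:

```sql
-- Inside Trip_margin: capture CostOM's result set, then sum per trip.
CREATE TABLE #CostOM (tripnumber INT, CostOwnMat DECIMAL(18, 2));

INSERT INTO #CostOM (tripnumber, CostOwnMat)
EXEC dbo.CostOM;   -- columns must match the procedure's final SELECT

SELECT t.tripnumber,
       c.total_cost_OM
FROM   #TripMargin AS t    -- placeholder for Trip_margin's own temp table
JOIN   (SELECT tripnumber, SUM(CostOwnMat) AS total_cost_OM
        FROM   #CostOM
        GROUP  BY tripnumber) AS c
  ON   c.tripnumber = t.tripnumber;
```

Summing before the join handles the case where the same tripnumber appears once per driver.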

How to loop through many to many?

I have a query that returns data like this: 

+-----------+---------------+---------------+--------------+
| recipe_id | name          | ingredient_id | item         |
+-----------+---------------+---------------+--------------+
|         1 | Apple Crumble |             1 | apple        |
|         1 | Apple Crumble |             5 | flour        |
|         1 | Apple Crumble |             7 | butter       |
|         1 | Apple Crumble |             8 | sugar        |
|         2 | Fruit Salad   |             6 | fruit juice  |
|         2 | Fruit Salad   |             2 | banana       |
|         2 | Fruit Salad   |             1 | apple        |
|         2 | Fruit Salad   |             3 | kiwi fruit   |
|         2 | Fruit Salad   |             4 | strawberries |
+-----------+---------------+---------------+--------------+

How do I loop through and remove the duplicate records, but keep the items for each? So, something like this:

1 Apple crumble 1 apple, 5 flour, ... .. ..
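Rather than looping and de-duplicating in application code, MySQL can collapse the rows itself with GROUP_CONCAT. A sketch, with `recipe_ingredient_rows` standing in for whatever the original query selects from:

```sql
SELECT recipe_id,
       name,
       GROUP_CONCAT(CONCAT(ingredient_id, ' ', item)
                    ORDER BY ingredient_id
                    SEPARATOR ', ') AS items
FROM   recipe_ingredient_rows
GROUP  BY recipe_id, name;
```

This yields one row per recipe, e.g. "1 | Apple Crumble | 1 apple, 5 flour, 7 butter, 8 sugar".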

How to import a phpMyAdmin SQL dump file into a MySQL DB?

I know how to import .sql files into mysql using the following command:

mysql -u root -p <db-name> < <sql-file>

But I am now looking at a PHP project which comes with the following file: databases.php.

How do I import this file (i.e. apply all the SQL statements in it) into MySQL?

The databases.php includes the following SQL:

-- phpMyAdmin SQL Dump
-- version 3.2.0.1
-- http://ift.tt/HXFLWR
--
-- Host: localhost
-- Generation Time: Aug 05, 2010 at 06:37 PM
-- Server version: 5.1.36
-- PHP Version: 5.3.0

SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";

--
-- Database: `chapter11`
--

-- --------------------------------------------------------

--
-- Table structure for table `comments`
--

CREATE TABLE IF NOT EXISTS `comments` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `comment` longtext NOT NULL,
  `profile_post` int(11) NOT NULL,
  `creator` int(11) NOT NULL,
  `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `approved` tinyint(1) NOT NULL DEFAULT '1',
  PRIMARY KEY (`ID`),
  KEY `profile_post` (`profile_post`,`creator`,`approved`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;

--
-- Dumping data for table `comments`
--

INSERT INTO `comments` (`ID`, `comment`, `profile_post`, `creator`, `created`, `approved`) VALUES
(1, 'This is a test comment', 1, 1, '2010-05-13 18:01:29', 1);

-- --------------------------------------------------------

--
-- Table structure for table `controllers`
--

CREATE TABLE IF NOT EXISTS `controllers` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `controller` varchar(255) NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  PRIMARY KEY (`ID`),
  UNIQUE KEY `controller` (`controller`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=12 ;

--
-- Dumping data for table `controllers`
--

INSERT INTO `controllers` (`ID`, `controller`, `active`) VALUES
(1, 'authenticate', 1),
(2, 'members', 1),
(3, 'relationship', 1),
(4, 'relationships', 1),
(5, 'profile', 1),
(6, 'calendar', 1),
(7, 'stream', 1),
(8, 'messages', 1),
(9, 'groups', 1),
(10, 'group', 1),
(11, 'api', 1);

-- --------------------------------------------------------

--
-- Table structure for table `events`
--

CREATE TABLE IF NOT EXISTS `events` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `creator` int(11) NOT NULL,
  `name` varchar(255) NOT NULL,
  `description` longtext NOT NULL,
  `event_date` date NOT NULL,
  `start_time` time NOT NULL,
  `end_time` time NOT NULL,
  `type` enum('public','private') NOT NULL,
  `active` tinyint(1) NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

--
-- Dumping data for table `events`
--


-- --------------------------------------------------------

--
-- Table structure for table `event_attendees`
--

CREATE TABLE IF NOT EXISTS `event_attendees` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `event_id` int(11) NOT NULL,
  `user_id` int(11) NOT NULL,
  `status` enum('invited','going','not going','maybe') NOT NULL,
  PRIMARY KEY (`ID`),
  KEY `status` (`status`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;

--
-- Dumping data for table `event_attendees`
--


-- --------------------------------------------------------

--
-- Table structure for table `groups`
--

CREATE TABLE IF NOT EXISTS `groups` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL,
  `description` longtext NOT NULL,
  `creator` int(11) NOT NULL,
  `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `type` enum('public','private','private-member-invite','private-self-invite') NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  PRIMARY KEY (`ID`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;

--
-- Dumping data for table `groups`
--

INSERT INTO `groups` (`ID`, `name`, `description`, `creator`, `created`, `type`, `active`) VALUES
(1, 'test group', 'test group about xyz', 1, '2010-08-02 01:15:53', 'public', 1),
(2, 'Dinosaur Activities in the North East', 'Group dedicated to the promotion of dinosaur friendly activities in the North East of England', 1, '2010-08-02 02:14:24', 'public', 1);

-- --------------------------------------------------------

--
-- Table structure for table `group_membership`
--

CREATE TABLE IF NOT EXISTS `group_membership` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `group` int(11) NOT NULL,
  `user` int(11) NOT NULL,
  `approved` tinyint(1) NOT NULL DEFAULT '0',
  `requested` tinyint(1) NOT NULL DEFAULT '0',
  `invited` tinyint(1) NOT NULL DEFAULT '0',
  `requested_date` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `invited_date` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `join_date` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `inviter` int(11) NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;

--
-- Dumping data for table `group_membership`
--

INSERT INTO `group_membership` (`ID`, `group`, `user`, `approved`, `requested`, `invited`, `requested_date`, `invited_date`, `join_date`, `inviter`) VALUES
(1, 2, 2, 1, 0, 0, '0000-00-00 00:00:00', '0000-00-00 00:00:00', '0000-00-00 00:00:00', 0);

-- --------------------------------------------------------

--
-- Table structure for table `ikes`
--

CREATE TABLE IF NOT EXISTS `ikes` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `type` enum('likes','dislikes') NOT NULL,
  `status` int(11) NOT NULL,
  `iker` int(11) NOT NULL,
  `iked` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`ID`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ;

--
-- Dumping data for table `ikes`
--

INSERT INTO `ikes` (`ID`, `type`, `status`, `iker`, `iked`) VALUES
(1, 'likes', 6, 1, '2010-06-23 00:15:32'),
(2, 'dislikes', 6, 2, '2010-06-23 00:15:32'),
(3, 'likes', 6, 3, '2010-06-23 00:15:38');

-- --------------------------------------------------------

--
-- Table structure for table `messages`
--

CREATE TABLE IF NOT EXISTS `messages` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `sender` int(11) NOT NULL,
  `recipient` int(11) NOT NULL,
  `sent` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `read` tinyint(1) NOT NULL,
  `subject` varchar(255) NOT NULL,
  `message` longtext NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=7 ;

--
-- Dumping data for table `messages`
--

INSERT INTO `messages` (`ID`, `sender`, `recipient`, `sent`, `read`, `subject`, `message`) VALUES
(1, 2, 3, '2010-06-27 23:19:41', 1, 'test', 'test msg'),
(4, 2, 1, '2010-06-04 16:26:29', 1, 'Saturday?', 'Are you still up for going hill walking with Mr. Glen on Saturday; let me know if you do need to borrow my t-rex leash, as I have a spare one.\r\n<br />\r\nCheers,<br />\r\nRick'),
(3, 2, 1, '2010-06-01 16:25:57', 1, 'Check out this link', ''),
(5, 3, 1, '2010-06-10 16:26:42', 1, 'Hi', ''),
(6, 1, 2, '2010-06-30 17:12:27', 0, 'Re: Saturday?', 'Yes!');

-- --------------------------------------------------------

--
-- Table structure for table `posts`
--

CREATE TABLE IF NOT EXISTS `posts` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `topic` int(11) NOT NULL,
  `post` longtext NOT NULL,
  `creator` int(11) NOT NULL,
  `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`ID`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;

--
-- Dumping data for table `posts`
--

INSERT INTO `posts` (`ID`, `topic`, `post`, `creator`, `created`) VALUES
(1, 1, 'We are planning on arranging a regular walk around the riverside park with our T-Rex''s - anyone want to join us?', 1, '2010-07-15 12:20:22'),
(2, 2, 'This is another new topic', 1, '2010-07-20 02:50:52'),
(3, 0, 'this is a test reply', 1, '2010-08-02 03:08:51'),
(4, 2, 'this is a test reply', 1, '2010-07-21 02:50:52');

-- --------------------------------------------------------

--
-- Table structure for table `profile`
--

CREATE TABLE IF NOT EXISTS `profile` (
  `user_id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL,
  `dino_name` varchar(255) NOT NULL,
  `dino_dob` varchar(255) NOT NULL,
  `dino_breed` varchar(255) NOT NULL,
  `dino_gender` varchar(255) NOT NULL,
  `photo` varchar(255) NOT NULL,
  `bio` longtext NOT NULL,
  `user_dob` date NOT NULL,
  PRIMARY KEY (`user_id`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;

--
-- Dumping data for table `profile`
--

INSERT INTO `profile` (`user_id`, `name`, `dino_name`, `dino_dob`, `dino_breed`, `dino_gender`, `photo`, `bio`, `user_dob`) VALUES
(1, 'Michael Peacock', 'Mr Glen', '01/01/1990', 'T-Rex', 'male', 'n663170160_722.jpg', 'I''m a web developer from the North East of England, running web design agency Peacock Carter a team of 4 Internet specialists.  I''ve also written a number of books, including, PHP 5 E-Commerce Development, Drupal 6 Social Networking, Selling Online with Drupal e-Commerce and Building Websites with TYPO3.', '1987-07-28'),
(2, 'Richard Thompson', 'Stu Fishman', '', 'stegosaurus', 'male', '', '', '1910-06-01');

-- --------------------------------------------------------

--
-- Table structure for table `relationships`
--

CREATE TABLE IF NOT EXISTS `relationships` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `type` int(11) NOT NULL,
  `usera` int(11) NOT NULL,
  `userb` int(11) NOT NULL,
  `accepted` tinyint(1) NOT NULL,
  PRIMARY KEY (`ID`),
  KEY `type` (`type`,`usera`,`userb`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;

--
-- Dumping data for table `relationships`
--

INSERT INTO `relationships` (`ID`, `type`, `usera`, `userb`, `accepted`) VALUES
(1, 3, 1, 2, 1);

-- --------------------------------------------------------

--
-- Table structure for table `relationship_types`
--

CREATE TABLE IF NOT EXISTS `relationship_types` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL,
  `plural_name` varchar(255) NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  `mutual` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=4 ;

--
-- Dumping data for table `relationship_types`
--

INSERT INTO `relationship_types` (`ID`, `name`, `plural_name`, `active`, `mutual`) VALUES
(1, 'Friend', 'friends', 1, 1),
(2, 'Colleague', 'colleagues', 1, 1),
(3, 'Jogging buddy', 'Jogging buddies', 1, 1);

-- --------------------------------------------------------

--
-- Table structure for table `settings`
--

CREATE TABLE IF NOT EXISTS `settings` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `key` varchar(255) NOT NULL,
  `value` longtext NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=6 ;

--
-- Dumping data for table `settings`
--

INSERT INTO `settings` (`ID`, `key`, `value`) VALUES
(1, 'view', 'default'),
(2, 'sitename', 'DINO SPACE!'),
(3, 'siteurl', 'http://localhost/mkpbook5/trunk/chapter11/'),
(4, 'captcha.enabled', '0'),
(5, 'upload_path', 'c:/wamp/www/mkpbook5/trunk/chapter11/uploads/');

-- --------------------------------------------------------

--
-- Table structure for table `statuses`
--

CREATE TABLE IF NOT EXISTS `statuses` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `update` longtext NOT NULL,
  `type` int(255) NOT NULL,
  `poster` int(11) NOT NULL,
  `profile` int(11) NOT NULL,
  `posted` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`ID`),
  KEY `poster` (`poster`,`profile`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=34 ;

--
-- Dumping data for table `statuses`
--

INSERT INTO `statuses` (`ID`, `update`, `type`, `poster`, `profile`, `posted`) VALUES
(1, 'Test ABC', 1, 1, 0, '2010-05-13 17:40:52'),
(2, 'Look at this', 0, 1, 0, '2010-05-02 12:31:20'),
(3, 'Test - 1.2.3.4', 1, 1, 1, '2010-05-13 17:41:03'),
(4, 'This is an update on someones profile', 1, 2, 1, '2010-06-02 21:53:34'),
(5, 'This is another update on someones profile', 1, 1, 2, '2010-06-02 21:53:37'),
(6, 'Nice to see you on here!', 1, 3, 1, '2010-06-22 22:20:43'),
(12, 'Taking my Dino out for a walk', 1, 1, 1, '2010-06-27 21:30:10'),
(32, 'Dinosaurs! I loved this show!', 3, 1, 1, '2010-07-02 23:38:39'),
(33, 'Really useful site!', 4, 1, 1, '2010-07-02 23:52:56'),
(30, 'I''m on stage rehearsing!', 2, 1, 1, '2010-07-02 23:02:42');

-- --------------------------------------------------------

--
-- Table structure for table `statuses_images`
--

CREATE TABLE IF NOT EXISTS `statuses_images` (
  `id` int(11) NOT NULL,
  `image` varchar(255) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

--
-- Dumping data for table `statuses_images`
--

INSERT INTO `statuses_images` (`id`, `image`) VALUES
(30, '1278108160_2.JPG');

-- --------------------------------------------------------

--
-- Table structure for table `statuses_links`
--

CREATE TABLE IF NOT EXISTS `statuses_links` (
  `id` int(11) NOT NULL,
  `URL` varchar(255) NOT NULL,
  `description` varchar(255) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

--
-- Dumping data for table `statuses_links`
--

INSERT INTO `statuses_links` (`id`, `URL`, `description`) VALUES
(33, 'http://ift.tt/O2rMYp', 'T-Rex on Wikipedia');

-- --------------------------------------------------------

--
-- Table structure for table `statuses_videos`
--

CREATE TABLE IF NOT EXISTS `statuses_videos` (
  `id` int(11) NOT NULL,
  `video_id` varchar(50) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

--
-- Dumping data for table `statuses_videos`
--

INSERT INTO `statuses_videos` (`id`, `video_id`) VALUES
(32, 'BkAEH6uX7hQ');

-- --------------------------------------------------------

--
-- Table structure for table `status_types`
--

CREATE TABLE IF NOT EXISTS `status_types` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `type_name` varchar(100) NOT NULL,
  `type_reference` varchar(50) NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  `type_name_other` varchar(255) NOT NULL,
  PRIMARY KEY (`ID`),
  UNIQUE KEY `type_reference` (`type_reference`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;

--
-- Dumping data for table `status_types`
--

INSERT INTO `status_types` (`ID`, `type_name`, `type_reference`, `active`, `type_name_other`) VALUES
(1, 'Changed their status to', 'update', 1, ''),
(2, 'Posted an image', 'image', 1, ''),
(3, 'Uploaded a video', 'video', 1, ''),
(4, 'Posted a link', 'link', 1, '');

-- --------------------------------------------------------

--
-- Table structure for table `topics`
--

CREATE TABLE IF NOT EXISTS `topics` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL,
  `creator` int(11) NOT NULL,
  `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `active` tinyint(1) NOT NULL DEFAULT '1',
  `group` int(11) NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=MyISAM  DEFAULT CHARSET=latin1 AUTO_INCREMENT=3 ;

--
-- Dumping data for table `topics`
--

INSERT INTO `topics` (`ID`, `name`, `creator`, `created`, `active`, `group`) VALUES
(1, 'Walk through Riverside Park, Chester-le-Street', 1, '2010-07-15 12:20:22', 1, 2),
(2, 'This is another new topic', 1, '2010-07-20 02:50:52', 1, 2);

-- --------------------------------------------------------

--
-- Table structure for table `users`
--

CREATE TABLE IF NOT EXISTS `users` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `password_hash` varchar(40) COLLATE utf8_unicode_ci NOT NULL,
  `password_salt` varchar(5) COLLATE utf8_unicode_ci NOT NULL,
  `email` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `active` tinyint(1) NOT NULL DEFAULT '0',
  `admin` tinyint(1) NOT NULL DEFAULT '0',
  `banned` tinyint(1) NOT NULL DEFAULT '0',
  `reset_key` varchar(15) COLLATE utf8_unicode_ci NOT NULL,
  `reset_expires` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `deleted` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB  DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=3 ;

--
-- Dumping data for table `users`
--

INSERT INTO `users` (`ID`, `username`, `password_hash`, `password_salt`, `email`, `active`, `admin`, `banned`, `reset_key`, `reset_expires`, `deleted`) VALUES
(1, 'michael', '5f4dcc3b5aa765d61d8327deb882cf99', '', 'mkpeacock@gmail.com', 1, 0, 0, '', '0000-00-00 00:00:00', 0),
(2, 'rich__t', '5f4dcc3b5aa765d61d8327deb882cf99', '', '', 1, 0, 0, '', '2010-04-01 00:19:39', 0);

Average elapsed days between two dates at customer level

I'm working on a way to average elapsed days between two dates on a customer level in SQL Server.

SELECT
CUSTOMER,
VISIT_ID,
DAYIN_DATE,
DAYOUT_DATE,
RANK () OVER (PARTITION BY CUSTOMER ORDER BY VISIT_ID DESC) AS RANKING,
LAG(DAYOUT_DATE, 1)
OVER (PARTITION BY CUSTOMER ORDER BY VISIT_ID DESC) AS DATEOUT_DT,
LEAD(DAYIN_DATE, 1)
OVER (PARTITION BY CUSTOMER ORDER BY VISIT_ID DESC) AS DATEIN_DT,
(DATEIN_DT - DATEOUT_DT) AS LATENCY
FROM VISIT_TABLE
GROUP BY 1,2,3,4;

This code takes the customer and visit (unique id), then ranks them by the DAYIN_DATE/DAYOUT_DATE. I've written LAG and LEAD functions to help calculate the latency, but when averaged it doesn't average at the customer level.

What am I missing?
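Two things stand out: window-function results can't be referenced by alias in the same SELECT (so `DATEIN_DT - DATEOUT_DT` needs a CTE or derived table), and averaging per customer needs an actual AVG with GROUP BY. A sketch, assuming SQL Server and that the gap of interest is between one visit's DAYOUT_DATE and the next visit's DAYIN_DATE:

```sql
WITH visits AS (
    SELECT CUSTOMER,
           DAYIN_DATE,
           LAG(DAYOUT_DATE) OVER (PARTITION BY CUSTOMER
                                  ORDER BY VISIT_ID) AS prev_out
    FROM   VISIT_TABLE
)
SELECT CUSTOMER,
       AVG(CAST(DATEDIFF(day, prev_out, DAYIN_DATE) AS float)) AS avg_gap_days
FROM   visits
WHERE  prev_out IS NOT NULL   -- the first visit has no prior discharge
GROUP  BY CUSTOMER;
```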

Select multiple rows with multiple matching id(s)

Is there a way to query data to display multiple rows with multiple selectors?

like:

SELECT * FROM `book` WHERE `book`.id = 1 AND `book`.id = 2;

table:

id name
1  book1
2  book2
3  book3
4  book4

I don't suppose looping for each id is favourable.
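Worth noting: a single row can never satisfy `id = 1 AND id = 2` at once, which is why the query above returns nothing. Matching any of several ids is what `IN` (or `OR`) does:

```sql
-- Returns the rows for book1 and book2 in one query.
SELECT * FROM `book` WHERE `book`.id IN (1, 2);

-- Equivalent, but IN reads better as the id list grows:
SELECT * FROM `book` WHERE `book`.id = 1 OR `book`.id = 2;
```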

How to avoid nested query in PostgreSQL

I managed to write this query:

INSERT INTO UserRoles (UserId, RoleId)
VALUES (@UserId, (SELECT Id FROM Roles WHERE Name = @Name));

And it works as intended. The problem is that I feel like this is a bad approach to write nested queries (here - find the id of a role in Roles table that matches the specified name). Is it possible to make this without the nested query?

Experienced programmers may laugh, but I am trying to learn this and I don't know how could I make this query better. I would be very grateful for your advice.
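One common rewrite is INSERT ... SELECT, which moves the lookup out of the VALUES clause (keeping the @ parameter placeholders from the question):

```sql
INSERT INTO UserRoles (UserId, RoleId)
SELECT @UserId, Id
FROM   Roles
WHERE  Name = @Name;
```

A side effect worth knowing: if no role matches @Name, this inserts nothing, whereas the VALUES form would try to insert a NULL RoleId.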

Converting DOB format

I have a table called NAMES that has a DOB column in nvarchar which, for some reason, will not convert to date. That is fine as long as I can run a snippet of code to convert it, which is actually preferable. So here is the dilemma: some dates are m/d/yyyy and some are mm/dd/yyyy.

Examples:

2/4/1986
10/8/1999
4/24/1988
12/13/1975

I'd like to run a snippet of code to convert everything to a standard mm/dd/yyyy format. Thank you in advance...
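Since the column is nvarchar, this sounds like SQL Server; assuming SQL Server 2012 or later, a round-trip through the date type normalizes the padding (style 101 is mm/dd/yyyy), and TRY_CONVERT leaves the rows that genuinely won't parse untouched instead of failing:

```sql
UPDATE NAMES
SET    DOB = CONVERT(varchar(10), TRY_CONVERT(date, DOB, 101), 101)
WHERE  TRY_CONVERT(date, DOB, 101) IS NOT NULL;
```

Running `SELECT DOB FROM NAMES WHERE TRY_CONVERT(date, DOB, 101) IS NULL` afterwards would surface whichever values still refuse to convert.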

Is this code in a stored procedure SQL-injectable?

If I have a where clause in a stored procedure that concatenates a search string that comes from the front end (user supplied) ... like this:

FirstName like '%'+@SearchString+'%'

Can this be injected with malicious code?
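The answer hinges on where the concatenation happens. A sketch contrasting the two cases (the People table is a placeholder):

```sql
-- Safe: @SearchString stays a parameter; the server treats its contents
-- as data, never as SQL text.
SELECT *
FROM   People
WHERE  FirstName LIKE '%' + @SearchString + '%';

-- Dangerous: the parameter is pasted into a string that is then executed,
-- so a value like '''; DROP TABLE People; --' becomes live SQL.
DECLARE @sql nvarchar(max) =
    N'SELECT * FROM People WHERE FirstName LIKE ''%' + @SearchString + '%''';
EXEC (@sql);
```

Wildcard characters (%, _) in the input can still distort LIKE matches in the safe form, but that is a matching quirk, not injection.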

How can I migrate Postgres 9.3.5 to 9.4.4 after deleting the 9.3.5 oldbindir?

I'm trying to migrate from Postgres 9.3.5 to 9.4.4, and in doing so I've removed the old 9.3.5 install of Postgres (running brew cleanup too early did this, I think). Whilst following various sources' info on how to upgrade, I realised that in order to run pg_upgrade I need the oldbindir.

Is there any way around this, and if so, how? If not, how can I reinstall the old version in the right way, into the right location, in order to then run pg_upgrade?

Some context to help you understand where I am:

brew info postgres

postgresql: stable 9.4.4 (bottled)
Object-relational database system
http://ift.tt/1cHT2tT
Conflicts with: postgres-xc
/usr/local/Cellar/postgresql/9.4.4 (3014 files, 40M) *
  Poured from bottle
From:     http://ift.tt/1CBlqEP
==> Dependencies
Required: openssl ✔, readline ✔
==> Options
--32-bit
    Build 32-bit only
--with-dtrace
    Build with DTrace support
--with-python
    Build with python support
--without-perl
    Build without Perl support
--without-tcl
    Build without Tcl support
==> Caveats
If builds of PostgreSQL 9 are failing and you have version 8.x installed,
you may need to remove the previous version first. See:
  http://ift.tt/1h8DBZx

greps return the following, so the server seems not to be running locally:

ps aux | grep postgres
501  1228   367   0 10:37am ttys000    0:00.01 grep postgres

ps aux | grep pgsql
501  1260   367   0 10:42am ttys000    0:00.01 grep pgsql

In /usr/local/Cellar/postgresql I have only; 9.4.4

In /usr/local/var I currently have:

postgres.old - where all of my old databases reside.

postgres.old2 - unsure of where this came from, tbh; I had to chown it to get to look at the contents, and there is only a server.log inside.

postgres9.4 - full, seemingly as it should be after having been initdb'd.

'postgres' - empty; I think this was created in a failed pg_upgrade attempt.

Any help hugely appreciated, I've been on this for many hours now. Any more context info you require to help find the right solution out of this, just let me know. Thanks

Query from same table to extract different data

In a single table I have 3 columns. The first defines a sector, the second a count and the third an amount. I need to extract 5 columns of data in the following manner: the first column is the sector; the second and third should contain the count and amount where amount is less than count; and the fourth and fifth should display them where amount is more than count, for the specific sectors. How should my query look?
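If I've read the layout right, conditional CASE expressions can split one value column into the "less than" and "more than" pairs. This is only a guess at the intent; the table and column names (sector_totals, cnt, amount) are placeholders:

```sql
SELECT sector,
       CASE WHEN amount < cnt THEN cnt    END AS count_when_less,
       CASE WHEN amount < cnt THEN amount END AS amount_when_less,
       CASE WHEN amount > cnt THEN cnt    END AS count_when_more,
       CASE WHEN amount > cnt THEN amount END AS amount_when_more
FROM   sector_totals;
```

Each row lands in one pair of columns and leaves the other pair NULL; wrapping this in a GROUP BY sector with SUM() would collapse it to one row per sector if that is what's wanted.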

Advanced MySQL query which sorts specific records to the bottom of the result set irrespective of its default sorting?

I have a query which already has sorting using an ORDER BY clause. I have a table like the following...

user_id   user_name   user_age   user_state   user_points 
1         Rakul       30         CA           56
2         Naydee      29         NY           144
3         Jeet        40         NJ           43
.....

I have the following query...

select * from users where user_state = 'NY' order by user_points desc limit 50;

This gives me the list of 50 people with the most points. I want to give least preference to a few people whose ids are known. In case I do not have 50 records, those ids should come last in the list. I do not want users 2 and 3 to come at the top of the list even though they have higher points; those people should come at the end of the query's results. Is there any way to push specific records to the end of the result set irrespective of the query's sorting?
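In MySQL a boolean expression sorts as 0 or 1, so a first ORDER BY key can sink the known ids to the bottom while the points ordering still applies within each group. A sketch using the ids from the question:

```sql
SELECT *
FROM   users
WHERE  user_state = 'NY'
ORDER  BY (user_id IN (2, 3)) ASC,   -- 0 for normal rows, 1 for deprioritized ids
          user_points DESC
LIMIT  50;
```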

Update Query in Farsi

I have a problem with my query in SQL Server 2012:

UPDATE STUDENTS SET نام = N'%[احسان]%' WHERE نام = N'%[علی]%'; 

I used this, but the result is 0 rows affected. I have a column نام and other columns with Farsi names.
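Two things stand out: `=` compares literally, so a `'%...%'` pattern only works with LIKE; and in SQL Server's LIKE, square brackets are a character class, so `[علی]` matches any single one of those letters rather than the word. Assuming the goal is to substitute علی with احسان inside نام, a sketch:

```sql
-- Match with LIKE (no brackets), substitute with REPLACE:
UPDATE STUDENTS
SET نام = REPLACE(نام, N'علی', N'احسان')
WHERE نام LIKE N'%علی%';
```

If the intent was instead to overwrite the whole column value, drop the REPLACE and assign `N'احسان'` directly.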

Reading very large fixed(ish) width format txt files from SQL into R data.tables or likewise

I'm trying to read in (and eventually merge/link/manipulate) a series of large (~300M) and very large (~4G) fixed width files for eventual regressions, visualizations, etc., and am hitting some snags.

First, the format of the files themselves is odd - I'm guessing something SQL-y. The file format is referenced here: http://ift.tt/1NnUwWE . It's fixed width, but the last column seems to (sometimes?) get cut off with an \r\n before the full fixed width for that column is reached. For reading it in I've tried laf_open_fwf and data.table::fread, but they both seem to get confused. A sample file and the associated non-XML format descriptor is here. I can't even get the thing to read in properly with that goofy last column.

Second, file size is an issue. I know I have a lot of table manipulation to do, so I'm tempted to look at data.table... but I also believe data.table stores the entire object in RAM, and that's going to be problematic. LaF or ffdf or sqlite seem like options, though I'm new to them, and would need to cope with this file format issue first.

Some questions that get at this general idea, suggesting LaF, ffbase or data.table, are below...

Reading big data with fixed width

Quickly reading very large tables as dataframes in R

Speed up import of fixed width format table in R

... but none seems to (1) deal with this odd fixed-width-ish format or (2) move data eventually into data.tables, which is what I'd like to try first. I thought about opening and rewriting them as well-formatted CSVs so data.table could handle them (my goofy hack through data.frames and back to CSV, below, feels ridiculous and unscalable). And the CSV export demonstrates how confused the file gets, since the laf reader goes strictly by field length instead of adjusting based on where the \r\n is...

Currently I'm trying something like the below for starters. Help, if possible?

searchbasis.laf = laf_open_fwf("SEARCHBASIS.txt",
                               column_widths = c(12, 12, 12, 12, 10), 
                               column_names = c("SearchBasisID", "SearchID", "PersonID", "StopID", "Basis"),
                               column_types = rep("string",5),
                               trim = T)
searchbasis.dt = as.data.table(as.data.frame(laf_to_ffdf(searchbasis.laf)))
write.csv(searchbasis.dt, file="SEARCHBASIS.csv")
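One way around the short last column is a small pre-pass that pads every record back to the full fixed width before handing the file to a strict fixed-width reader. A sketch in Python (the widths come from the laf_open_fwf() call above; the sample rows are invented):

```python
# Pad every record to the full fixed width so a strict fixed-width
# reader (LaF, fread, read.fwf) no longer mis-aligns on the short
# last column.
widths = [12, 12, 12, 12, 10]
total = sum(widths)

def pad_records(lines):
    return [line.ljust(total) for line in lines]

def split_fixed(line):
    out, pos = [], 0
    for w in widths:
        out.append(line[pos:pos + w].strip())
        pos += w
    return out

# Invented sample; rstrip() mimics the truncated last column.
raw = [["A1", "S1", "P1", "ST1", "basis"],
       ["A2", "S2", "P2", "ST2", "b"]]
sample = ["".join(f.ljust(w) for f, w in zip(r, widths)).rstrip()
          for r in raw]

records = [split_fixed(line) for line in pad_records(sample)]
print(records[1])  # ['A2', 'S2', 'P2', 'ST2', 'b']
```

The padded lines can be written back out once (streamed, so nothing large sits in RAM) and then read with laf_open_fwf or fread unchanged.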

Troubles with using GROUP BY in SQL Query

I'm trying to write a proper SQL query in MS SQL Server. First of all, I have the following tables: Towns, Employees, Addresses. Almost every employee has a Manager, whose ManagerID is also a foreign key in Employees (a self relation). My goal is to display the number of managers from each town. So far I have this code:

SELECT t.Name, COUNT(*) AS [Managers from each town] FROM Towns t
JOIN Addresses a
ON t.TownID = a.TownID
JOIN Employees e
ON a.AddressID = e.AddressID
GROUP BY t.Name
ORDER BY [Managers from each town] DESC

This query returns the number of Employees from each town, not Managers. If I try the second query below, I get something totally wrong:

SELECT t.Name, COUNT(*) AS [Managers from each town] FROM Towns t
JOIN Addresses a
ON t.TownID = a.TownID
JOIN Employees e
ON a.AddressID = e.AddressID
JOIN Employees m
ON e.ManagerID = m.ManagerID

GROUP BY t.Name
ORDER BY [Managers from each town] DESC

Here is the structure of 'Employees' table:

EmployeeID, FirstName, LastName, MiddleName, JobTitle,DepartamentID, ManagerID, HireDate, Salary, AddressID

The correct query must return this result set:

Town  | Managers from each town
Issaquah |  3
Kenmore  | 5
Monroe |    2
Newport Hills | 1
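The first query counts every employee because nothing restricts it to managers, and the second joins on ManagerID = ManagerID, which pairs up every two employees sharing a manager. One sketch, assuming a "manager" is any employee whose EmployeeID appears in some row's ManagerID column:

```sql
SELECT t.Name, COUNT(DISTINCT m.EmployeeID) AS [Managers from each town]
FROM Employees m
JOIN Addresses a ON m.AddressID = a.AddressID
JOIN Towns t     ON a.TownID    = t.TownID
WHERE m.EmployeeID IN (SELECT ManagerID FROM Employees
                       WHERE ManagerID IS NOT NULL)
GROUP BY t.Name
ORDER BY [Managers from each town] DESC;
```

COUNT(DISTINCT ...) guards against double counting if an employee somehow joins to more than one address row.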

SQL: check if a value is between two different values from another table

I have a table like this in my Oracle database

ID
------
10 000
25 000
40 000
61 000
75 000

I need to check if value, for example 46 567, is between some of two neighbour values from this table.

46 567 is between 40 000 and 61 000, I should choose 40 000 because it is closer to 46 567 than 61 000.

Sure, I can find the top and low borders like this:

--top border
SELECT MIN(ID) FROM (SELECT * FROM this_table WHERE ID > 46567);
--low border
SELECT MAX(ID) FROM (SELECT * FROM this_table WHERE ID < 46567);

But one cool pro told me I should use a hierarchical query and it would look really good; I just don't see how a hierarchical query can solve this problem. It should look something like this Fibonacci SQL, but I can't implement it:

WITH
numbers (n, prev) AS (
SELECT 0 n, 1 prev from dual
UNION ALL
SELECT n + prev, n from numbers
WHERE n + prev < 50
)

Can you help me?
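A hierarchical (CONNECT BY) query isn't really needed here; a single ordered query can pick the neighbour directly. Two Oracle sketches (the second needs 12c for FETCH FIRST; on older versions wrap an ordered subquery in `WHERE ROWNUM = 1`):

```sql
-- Nearest value at or below 46567 (the "low border"):
SELECT MAX(id) AS low_border
FROM this_table
WHERE id <= 46567;

-- Or the nearest neighbour in either direction (Oracle 12c+):
SELECT id
FROM this_table
ORDER BY ABS(id - 46567)
FETCH FIRST 1 ROW ONLY;
```

The second form returns 40 000 for 46 567 because 6 567 is smaller than 14 433, matching the "closer value" requirement.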

how to make wildcards respect the spaces between words

I'm working with a table with three columns( ID | NOTE |TAG)

ID- primary key

NOTE- some things

TAG- contains values separated with space

imagine that id-1 contains TAG- computer javaScript java forton cellphone

and id-2 contains TAG- computer java forton c++ android

When using a wildcard to search for the term java, which is in the middle of a string, SQLite also shows the row containing javaScript.

My query is " LIKE '%" + "java" + "%'"; this shows a row if the word java is found anywhere inside it, i.e. it matches java as well as javaScript and Scriptinjava.

How can I make it match only java?
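A common trick: wrap TAG in spaces so every tag, including the first and last, is space-delimited, then match the term with spaces around it. Sketched below with Python's sqlite3 (the third row, with only javaScript-style tags, is a made-up extra to show the filtering):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, note TEXT, tag TEXT)")
con.executemany("INSERT INTO notes (note, tag) VALUES (?, ?)",
                [("n1", "computer javaScript java forton cellphone"),
                 ("n2", "computer java forton c++ android"),
                 ("n3", "Scriptinjava javaScript")])  # hypothetical extra row

# ' ' || tag || ' ' gives every tag a space on both sides, so
# '% java %' matches whole tags only, never javaScript/Scriptinjava.
term = "java"
rows = con.execute(
    "SELECT id FROM notes WHERE ' ' || tag || ' ' LIKE ? ORDER BY id",
    ("% " + term + " %",)).fetchall()
print(rows)  # [(1,), (2,)]
```

In Android/Java the same pattern works with `" ' ' || TAG || ' ' LIKE '% " + term + " %'"`, though a separate normalized tag table is the cleaner long-term design.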

What is the equivalent of .moveToPosition(Java) in swift while using FMDB

I have queried a database, and so far all the code transfers over well from Java to Swift except for one method, Cursor.moveToPosition(int). This method is pretty important, as my SQL statement brings back a number of rows and I want to pick just one (at random, using arc4random_uniform(returnCount)) to use.

Basic question: how can I pick one row, not just the first, from the retrieved results of a SQL query in Swift, like Java does with its Cursor?
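FMDB's FMResultSet has no direct moveToPosition equivalent because it is forward-only; you either call next() the required number of times or buffer the rows and index into them. A sketch in 2015-era Swift (the table/column names and the exact FMDB method spellings are assumptions; check them against your FMDB version):

```swift
// Buffer all rows, then pick one at random. FMResultSet is
// forward-only, unlike Android's Cursor.
var titles: [String] = []
if let rs = db.executeQuery("SELECT title FROM movies",
                            withArgumentsInArray: nil) {
    while rs.next() {
        titles.append(rs.stringForColumn("title"))
    }
}
if !titles.isEmpty {
    let pick = titles[Int(arc4random_uniform(UInt32(titles.count)))]
    print(pick)
}
```

If the result set is large, an alternative is to let SQL do the work: `ORDER BY RANDOM() LIMIT 1`.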

Which is better when using the entity framework: using linq (or lambda expressions) for all transactions or all stored procedures

I love the entity framework. I also hate stored procedures; so I code all my applications using lambda expressions ...

I never use cursors, and 99% of my transactions are CRUD. Other than precompilation, do stored procedures via the entity framework provide me any benefits that anyone can suggest?

How to divide between rows in one table in Postgres

I am working in Postgres with a table like this:

mon   yyyy   weather

Apr   2014   78.45

Apr   2015   77.32

May   2014   79.56

May   2015   78.43

I would like to query some results, ordered by "mon", where the Weather column values are divided year-on-year by month.

In other words, I want to query weather where Apr 2015 is divided by Apr 2014.

However, I would like to write the query so that I do not have to specify the month or year: it should automatically divide Weather values as Apr 2015/Apr 2014, then May 2015/May 2014, and so on, without my having to key in every month and every year, which is laborious.

I have the following code, but this expands columns which is not what I want:


select (select "Weather" from yoy 
        where mon = 'Apr' and yyyy = '2015'
     )/(select "Weather" from yoy 
        where mon = 'Apr' and yyyy = '2014'
     ) as "weather_apr",
     (select "Weather" from yoy 
      where mon = 'May' and yyyy = '2015'
     )/(select "Weather" from yoy 
        where mon = 'May' and yyyy = '2014'
     ) as "weather_may"
from yoy


Any help with writing such a query would be much appreciated, thanks!
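One generic approach is a self-join of the table to itself, one year back on the same month, so no month or year literals are needed. Sketched below with Python's sqlite3 (assuming yyyy is numeric; the ratio column name is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE yoy (mon TEXT, yyyy INTEGER, weather REAL)")
con.executemany("INSERT INTO yoy VALUES (?, ?, ?)",
                [("Apr", 2014, 78.45), ("Apr", 2015, 77.32),
                 ("May", 2014, 79.56), ("May", 2015, 78.43)])

# Join each (month, year) row to the same month one year earlier;
# new months and years then work automatically.
rows = con.execute("""
    SELECT cur.mon, cur.yyyy, cur.weather / prev.weather AS yoy_ratio
    FROM yoy AS cur
    JOIN yoy AS prev
      ON prev.mon = cur.mon AND prev.yyyy = cur.yyyy - 1
    ORDER BY cur.mon, cur.yyyy
""").fetchall()
for mon, yyyy, ratio in rows:
    print(mon, yyyy, ratio)
```

In Postgres the same idea can also be written with a window function: `weather / LAG(weather) OVER (PARTITION BY mon ORDER BY yyyy)`.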

insert from multiple tables into other

I would like to insert data from many tables into just one. I'm trying this, but it's not working. What is the correct way of doing this?

INSERT INTO geo_lists AS a
LEFT JOIN en AS b
LEFT JOIN fr AS c
LEFT JOIN de AS d
LEFT JOIN zh_cn AS e
LEFT JOIN es AS f
LEFT JOIN ru AS g
LEFT JOIN pt_br AS h
VALUES
    a.city_id = b.geoname_id,
    a.continent_code = b.continent_code,
    a.continent_name = b.continent_name,
    a.country_code = b.country_iso_code,
    -- countries
    a.en_country_name = b.country_name,
    a.fr_country_name = c.country_name,
    a.de_country_name = d.country_name,
    a.zh_country_name = e.country_name,
    a.es_country_name = f.country_name,
    a.ru_country_name = g.country_name,
    a.pt_country_name = h.country_name,
    -- states
    a.en_state_name = b.subdivision_1_name,
    a.fr_state_name = c.subdivision_1_name,
    a.de_state_name = d.subdivision_1_name,
    a.zh_state_name = e.subdivision_1_name,
    a.es_state_name = f.subdivision_1_name,
    a.ru_state_name = g.subdivision_1_name,
    a.pt_state_name = h.subdivision_1_name,
    -- provinces
    a.en_province_name = b.subdivision_2_name,
    a.fr_province_name = c.subdivision_2_name,
    a.de_province_name = d.subdivision_2_name,
    a.zh_province_name = e.subdivision_2_name,
    a.es_province_name = f.subdivision_2_name,
    a.ru_province_name = g.subdivision_2_name,
    a.pt_province_name = h.subdivision_2_name,
    -- cities
    a.en_city_name = b.city_name,
    a.fr_city_name = c.city_name,
    a.de_city_name = d.city_name,
    a.zh_city_name = e.city_name,
    a.es_city_name = f.city_name,
    a.ru_city_name = g.city_name,
    a.pt_city_name = h.city_name,
    a.time_zone = b.time_zone;

To store special characters from different languages, are utf8 and utf8_general_ci the correct charset and collation, or do I need to change them?
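INSERT doesn't take JOINs or column assignments; the usual shape is INSERT ... SELECT, with the joins in the SELECT. A truncated sketch (the `geoname_id` join key is an assumption; extend the column list and joins the same way for the remaining languages and fields):

```sql
INSERT INTO geo_lists
    (city_id, continent_code, continent_name, country_code,
     en_country_name, fr_country_name /* ...remaining columns... */)
SELECT b.geoname_id, b.continent_code, b.continent_name,
       b.country_iso_code,
       b.country_name, c.country_name /* ...matching expressions... */
FROM en AS b
LEFT JOIN fr AS c ON c.geoname_id = b.geoname_id
/* ...remaining LEFT JOINs on the same key... */;
```

On the charset question: MySQL's `utf8` is limited to 3-byte characters; `utf8mb4` with `utf8mb4_unicode_ci` is the safer choice for full Unicode coverage (including Chinese outside the BMP and emoji).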

How to merge two tables using ORDER BY?

While trying to merge two tables: when rows are not matched, how do I insert them in a given order? For example, in table_2 I have a column "Type" (sample values 1, 2, 3, etc.), so when I do an insert for unmatched codes I need to insert records with type 1 first, then 2, and so on.

So far I have tried the code below:

WITH tab1 AS
(
select * From TABLE_2 order by Type
)
merge  tab1 as Source using TABLE_1 as Target on Target.Code=Source.Code 
when matched then update set Target.Description=Source.Description
when not matched then insert (Code,Description,Type) 
values (Source.Code,Source.Description,Source.Type);

But I get the error "The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified." because of the ORDER BY inside the CTE.

So how do I insert records in a given order while merging two tables?

Thanks in advance.
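Two notes. First, the roles in the statement above are swapped: the syntax is `MERGE <target> USING <source>`, so the table being modified comes first. Second, row order in set-based DML is not guaranteed anyway; the one case where it can matter is assigning IDENTITY values, and the usual (somewhat hacky) workaround is a TOP ... ORDER BY in the source. A sketch under those assumptions:

```sql
-- SQL Server sketch: MERGE INTO the target, USING an ordered source.
MERGE TABLE_1 AS Target
USING (SELECT TOP (9223372036854775807) *
       FROM TABLE_2 ORDER BY Type) AS Source
ON Target.Code = Source.Code
WHEN MATCHED THEN
    UPDATE SET Target.Description = Source.Description
WHEN NOT MATCHED THEN
    INSERT (Code, Description, Type)
    VALUES (Source.Code, Source.Description, Source.Type);
```

If the goal is only that queries later return rows by Type, skip all of this and put the ORDER BY on the SELECTs instead; tables themselves have no inherent order.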

SQL not recognizing unicode enum value

I have a table that uses enums with special characters created as such:

create table test(test enum('Não informado',
'A selecionar',
'Em contratação',
'Ação Preparatória',
'Em licitação de obra',
'Em licitação de projeto',
'Em obras',
'Em execução',
'Concluído',
'Em operação'))

And I'm trying to add a row to it:

insert into test(test)
values('Concluído')

but SQL is giving me the error

Error Code: 1265. Data truncated for column 'test' at row 1

What am I doing wrong? How can I fix this?

I am running MySQL Server version 5.6.23-log (MySQL Community).
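Error 1265 on an enum column usually means the bytes arriving at the server don't match any enum member, which typically points at a connection-charset mismatch rather than the table definition. A sketch of the usual fix (set the connection charset to match the literals before inserting):

```sql
-- Make the connection speak the same charset as the string literals:
SET NAMES utf8;

INSERT INTO test (test) VALUES ('Concluído');
```

Client libraries have their own knob for this (e.g. a `charset=utf8` connection option in JDBC/PDO/Connector settings), which is the more permanent fix than SET NAMES per session.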

group by data in left join

I have two tables: orders and order_products. There is an id in the orders table (e.g. 9), and order_id (as a foreign key) is repeated 3 times in order_products. I want those three entries from the order_products table nested under the one entry from the orders table. My query is:

SELECT orders.*,order_products.* FROM orders LEFT JOIN order_products ON orders.id=order_products.order_id

this gives :

    Array
(
    [0] => Array
        (
            [orders] => Array
                (
                    [id] => 9
                    [name] => Abdus Sattar Bhuiyan
                    [email] => sattar.kuet@gmail.com
                    [mobile] => 01673050495
                    [alt_mobile] => 01818953250
                    [city_id] => 2
                    [location_id] => 5
                    [status] => No contact
                    [chashed] => NO
                    [created] => 2015-06-27 12:49:34
                    [modified] => 2015-06-27 12:49:34
                    [comment] => 
                )

            [order_products] => Array
                (
                    [id] => 2
                    [order_id] => 9
                    [product_id] => 1
                    [pieces] => 1
                )

        )

    [1] => Array
        (
            [orders] => Array
                (
                    [id] => 9
                    [name] => Abdus Sattar Bhuiyan
                    [email] => sattar.kuet@gmail.com
                    [mobile] => 01673050495
                    [alt_mobile] => 01818953250
                    [city_id] => 2
                    [location_id] => 5
                    [status] => No contact
                    [chashed] => NO
                    [created] => 2015-06-27 12:49:34
                    [modified] => 2015-06-27 12:49:34
                    [comment] => 
                )

            [order_products] => Array
                (
                    [id] => 3
                    [order_id] => 9
                    [product_id] => 2
                    [pieces] => 1
                )

        )

    [2] => Array
        (
            [orders] => Array
                (
                    [id] => 9
                    [name] => Abdus Sattar Bhuiyan
                    [email] => sattar.kuet@gmail.com
                    [mobile] => 01673050495
                    [alt_mobile] => 01818953250
                    [city_id] => 2
                    [location_id] => 5
                    [status] => No contact
                    [chashed] => NO
                    [created] => 2015-06-27 12:49:34
                    [modified] => 2015-06-27 12:49:34
                    [comment] => 
                )

            [order_products] => Array
                (
                    [id] => 4
                    [order_id] => 9
                    [product_id] => 3
                    [pieces] => 1
                )

        )

)

My expected result is:

Array
    (
        [0] => Array
            (
                [orders] => Array
                    (
                        [id] => 9
                        [name] => Abdus Sattar Bhuiyan
                        [email] => sattar.kuet@gmail.com
                        [mobile] => 01673050495
                        [alt_mobile] => 01818953250
                        [city_id] => 2
                        [location_id] => 5
                        [status] => No contact
                        [chashed] => NO
                        [created] => 2015-06-27 12:49:34
                        [modified] => 2015-06-27 12:49:34
                        [comment] => 
                    )

                [order_products] => Array
                    (
                          [0] => Array(
                                   [id] => 2
                                   [order_id] => 9
                                   [product_id] => 1
                                   [pieces] => 1
                                )
                           [1] => Array(
                                   [id] => 3
                                   [order_id] => 9
                                   [product_id] => 2
                                   [pieces] => 1
                                )   
                             [2] => Array(
                                  [id] => 4
                                  [order_id] => 9
                                  [product_id] => 3
                                  [pieces] => 1
                                ) 

                    )

            )
    )

I used GROUP BY orders.id, but no luck.

SQL DELETE WHERE GROUP BY ... HAVING

Would there be a way to make this SQL statement work?

DELETE AHsTransactions WHERE SerialNumber <> 0 GROUP BY TypeID, AH1ID, AH2ID HAVING COUNT(*) > 1

I get "Incorrect syntax near the keyword 'GROUP'", so it seems impossible in just one SQL statement.
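DELETE has no GROUP BY of its own, but on SQL Server you can stage the grouping in a CTE and delete through it. A sketch, assuming the intent is to delete every row of each duplicated (TypeID, AH1ID, AH2ID) group with SerialNumber <> 0:

```sql
WITH g AS (
    SELECT *,
           COUNT(*) OVER (PARTITION BY TypeID, AH1ID, AH2ID) AS cnt
    FROM AHsTransactions
    WHERE SerialNumber <> 0
)
DELETE FROM g
WHERE cnt > 1;
```

If instead one row per group should survive, swap COUNT(*) for `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)` and delete rows with a row number greater than 1.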

Fifo Method in SQL

I have to implement something similar to a FIFO method in SQL. I have points issued in different months, and points exchanged on other dates, and I need to know how many months before the exchange the points were issued. The first points to be exchanged are the oldest: if 200 points were issued on 30/06/2014 and 100 points on 31/07/2014, and on 31/08/2014 I exchanged 250 points, then 200 points have 2 months of development (the 200 from June) and 50 points have one month (50 of the 100 issued in July). How can I code this?

I include some tables for better understanding.

Thanks

Date of issue   Number of account   Issued points
30-abr                    1            300
31-may                    1             50
30-jun                    1            100
30-jun                    2            100
30-jun                    3            120
31-may                    4             20
30-jun                    4            200



Date of exchange    Number of account   Exchanged points
30-jun                     1                  250
31-jul                     1                  200
31-jul                     3                   30
30-jun                     4                   30
31-jul                     2                   10
30-jun                     3                   30
31-ago                     4                   10

And the final table I need is.

Issue date  Months after issue  Exchanged points
30-abr           2,00               250,00
30-abr           3,00                50,00
31-may           2,00                50,00
31-may           1,00                20,00
30-jun           1,00               100,00
30-jun           1,00                30,00
30-jun           0,00                10,00
30-jun           1,00                10,00
30-jun           0,00                30,00
30-jun           2,00                10,00

Edit: Adding example to clarify the problem with account 1:

Points issued in 3 different months:

Date of issue   Points
30-apr          300
31-may           50
30-jun          100

These are exchanged in 2 occasions:

Date of exchange    Points
30-jun              250
31-jul              200

Since the exchange is happening from points issued in different months, the result should be 4 actions, so that the earliest points are used first:

Date of exchange    Date of issue   Points    Months
30-jun              30-apr          250       2
31-jul              30-apr           50       3
31-jul              31-may           50       2
31-jul              30-jun          100       1
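The allocation logic itself is a small queue walk, which is worth pinning down before translating it into SQL (where it typically becomes a join of running totals, or a cursor/loop). A sketch in Python for a single account, assuming exchanges never exceed the points available:

```python
from collections import deque

def fifo_allocate(issues, exchanges):
    """Match exchanged points against the oldest issued points first.

    issues/exchanges: lists of (date, points) in chronological order
    for one account. Returns (exchange_date, issue_date, points) rows.
    """
    queue = deque(issues)           # oldest issue batch at the front
    out = []
    for ex_date, ex_points in exchanges:
        remaining = ex_points
        while remaining > 0:
            issue_date, avail = queue[0]
            used = min(avail, remaining)
            out.append((ex_date, issue_date, used))
            remaining -= used
            if used == avail:
                queue.popleft()     # batch exhausted
            else:
                queue[0] = (issue_date, avail - used)
    return out

# Account 1 from the question:
alloc = fifo_allocate([("30-apr", 300), ("31-may", 50), ("30-jun", 100)],
                      [("30-jun", 250), ("31-jul", 200)])
print(alloc)
# [('30-jun', '30-apr', 250), ('31-jul', '30-apr', 50),
#  ('31-jul', '31-may', 50), ('31-jul', '30-jun', 100)]
```

The set-based SQL equivalent computes a cumulative sum over issues and over exchanges and joins where the running intervals overlap; the "months after issue" column then falls out of the matched date pair.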

How can I pass arguments to a PL/SQL procedure, while invoking it from ant sql task?

I have an ant target like-

<target name="run-patched-sql-file">
    <path id="antclasspath">
        <fileset dir="${weblogic.server.dir}/server/lib">
            <include name="*ojdbc6.jar"/>
        </fileset>
    </path>
    <property name="pathvar" refid="antclasspath"/>
    <filelist id="sql-files" dir="../../db/scripts/oracle/">
            <file name="scripts/scriptToExecute.sql"/>
    </filelist>
    <sql driver="${operationsDB.driver}" url="jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=${operationsDB.host}) (PORT=${operationsDB.port}))(CONNECT_DATA=(SERVICE_NAME=${operationsDB.serviceName})))" userid="${operationsDB.user}" password="${OIM.DBPassword}" delimiter="/" delimitertype="row" keepformat="yes" onerror="continue" caching="true" escapeprocessing="no" classpathref="antclasspath">
        <path>
            <filelist refid="sql-files"/>
        </path>
    </sql>
</target>

Now scriptToExecute.sql expects an argument. How can I pass this argument to the SQL script from the ant task?
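Ant's `<sql>` task has no bind-parameter support, so the usual workaround is token substitution: put a token such as `@ARG@` in the script, copy it through a filterset, and run the generated file. A sketch (the `@ARG@` token, property name, and output path are placeholders, not part of the original build):

```xml
<!-- Substitute @ARG@ in the script before <sql> runs it. -->
<copy file="../../db/scripts/oracle/scripts/scriptToExecute.sql"
      tofile="${build.dir}/scriptToExecute.generated.sql"
      overwrite="true">
    <filterset>
        <filter token="ARG" value="${my.sql.arg}"/>
    </filterset>
</copy>
<!-- ...then point the <filelist> at the generated file. -->
```

Alternatively, restructure the script as a stored procedure and have the `<sql>` body call it with the value inlined via an ant property.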

How do I round a timestamp difference up to a number of days?

PreparedStatement psnmt=con.prepareStatement("SELECT (?)-(?) as DiffDate FROM dual");
psnmt.setTimestamp(1,ctenderdate);
psnmt.setTimestamp(2,btenderdate);
ResultSet resrt=psnmt.executeQuery();
if(!resrt.next())
{
     out.println("No Records Found");
}
else
{
  do
     {
        datediff=resrt.getString("DiffDate");
     }
  while(resrt.next()); 
  System.out.println("the no of days Difference"+datediff);

}

ctenderdate=2015-06-27 00:00:00.0

btenderdate=2015-06-29 00:00:00.0

datediff=1 10:18:51.940000000

Expected datediff=2

How do I round datediff up to a whole number of days?
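In Oracle, subtracting two DATE values yields a plain number of days, while subtracting TIMESTAMPs yields an INTERVAL (the `1 10:18:51.94`-style string above). Casting to DATE first and rounding up gives whole days; a sketch (ABS() covers either operand order):

```sql
SELECT CEIL(ABS(CAST(? AS DATE) - CAST(? AS DATE))) AS DiffDate
FROM dual;
```

With both sample timestamps at midnight, the difference between 2015-06-27 and 2015-06-29 comes out as exactly 2.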

Can I use raw sql insert or update in Sails?

According to the documentation (http://ift.tt/1QVYk7K), I'm able to use the query() method to run native SQL 'queries', but I'm confused about whether INSERT or UPDATE is considered a 'query', because some people use the term 'queries' to include INSERT and UPDATE in SQL.

SQL query that returns the number of straight victories

I have the following table :

match(id, userId1, userId2, userIdWinner, date)

I would like to get the number of straight victories for a specific user.

If you have some ideas to do that...

Thx :)

rails join query between two tables with similar field

I have 3 models

class Company < ActiveRecord::Base
  has_many :company_accounts
  has_many :company_contacts
end

class CompanyContact < ActiveRecord::Base
  belongs_to :company
end

class CompanyAccount < ActiveRecord::Base
  belongs_to :company
end

As both the CompanyAccount and CompanyContact models belong to the Company model, they have a similar "company_id" field. I have retrieved some Accounts through a query:

@CompanyAccounts = CompanyAccount.where.not(balance:nil)

Now, using the common company_id field I am trying to retrieve all the data from my CompanyContacts table that belong to the same Company associated with the CompanyAccounts I queried above (in other words, I am trying to get the rows which have the same company_id). I have made several attempts using "joins" but everything failed so far. Could anyone give me what would be the appropriate syntax in this context? Thanks.
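One idiomatic way avoids an explicit join entirely: pass a relation as a subquery on the shared company_id. A sketch, assuming conventional snake_case association and variable names:

```ruby
# Companies that have an account with a balance...
account_company_ids = CompanyAccount.where.not(balance: nil)
                                    .select(:company_id)

# ...and every contact belonging to those companies. ActiveRecord
# turns the relation into a single SQL subquery (WHERE company_id IN
# (SELECT company_id FROM company_accounts WHERE ...)).
company_contacts = CompanyContact.where(company_id: account_company_ids)
```

A joins-based equivalent (`CompanyContact.joins(company: :company_accounts).where.not(company_accounts: { balance: nil }).distinct`) works too, once the association names match the conventional plural symbols.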

How can I get my C# and MySql to return more than one entry from the database?

This is an excerpt from my code. Everything works the way it's supposed to, but I'm trying to add a feature which allows the program to return more than one row from the database - that is to say, more than one result.

If I search for "silver", I get Silverado; if I search "silver l", I get Silver Linings Playbook.

What I've been trying to get it to do is to search for "silver" and get Silverado AND Silver Linings Playbook, but my loops haven't fixed the issue and some help would be...helpful.

movieBox.Items.Clear();
try
{
    db_connection.Open();
    // Parameterize the search term instead of concatenating it into
    // the SQL string (avoids SQL injection).
    sql_command = new MySqlCommand(
        "select * from mov_db.movies where title like @pattern",
        db_connection);
    sql_command.Parameters.AddWithValue("@pattern", "%" + searchBox.Text + "%");
    sql_reader = sql_command.ExecuteReader();

    // Read() advances one row per call; an if() consumes only the
    // first match, so loop to collect every matching row.
    bool found = false;
    while (sql_reader.Read())
    {
        found = true;
        movieBox.Items.Add(sql_reader.GetString("title"));
        movieBox.Items.Add(sql_reader.GetString("year"));
        movieBox.Items.Add(sql_reader.GetString("genre"));
        movieBox.Items.Add(sql_reader.GetString("rating"));
    }
    if (!found)
    {
        MessageBox.Show("Sorry, but that title is unavailable.");
    }
}

MariaDB and 1064 error

I get the following error when I execute the code below:

ERROR 1064 (42000) at line 21: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near @LINE_TERMINATION@

Here is the code that I use to create table:

CREATE TABLE SRDEF (
    RT  VARCHAR (3) BINARY NOT NULL,
    UI  CHAR (4) BINARY NOT NULL,
    STY_RL  VARCHAR (41) BINARY NOT NULL,
    STN_RTN VARCHAR (14) BINARY NOT NULL,
    EX  VARCHAR (185) BINARY
) CHARACTER SET utf8;

And here is the code I use to populate table:

load data local infile 'SRDEF' into table SRDEF fields terminated by '|' 
ESCAPED BY '' lines terminated by @LINE_TERMINATION@
(@rt, @ui, @sty_rl, @stn_rtn, @ex)
SET RT = @rt,
UI = @ui,
STY_RL = @sty_rl,
STN_RTN = @stn_rtn,
EX = NULLIF(@ex,'');
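`@LINE_TERMINATION@` is an unexpanded template placeholder, not SQL; the statement needs a real line terminator there. A sketch (use `'\r\n'` instead of `'\n'` if the data file was produced on Windows):

```sql
load data local infile 'SRDEF' into table SRDEF
fields terminated by '|' ESCAPED BY ''
lines terminated by '\n'
(@rt, @ui, @sty_rl, @stn_rtn, @ex)
SET RT = @rt, UI = @ui, STY_RL = @sty_rl,
    STN_RTN = @stn_rtn, EX = NULLIF(@ex, '');
```

If the script came from a tool that was supposed to fill that token in (the UMLS load scripts use this pattern), re-running its configuration step is the proper fix.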

Any advice is greatly appreciated.

Entity Framework - code first - Too many navigation properties

I have two tables created with Entity Framework code first that I would like some help with!

Tables

  1. AccountLinks, a composite key of 3 columns
  2. Guest, a composite foreign key of 3 columns (?)

Table overview

[screenshot omitted]

SQL overview

[screenshot omitted]

As you can see, I have a lot of navigation properties in my database which I don't want.

Code for AccountLink

public class AccountLink
    {
        public AccountLink()
        {
            AccountLinkPermissionAccountLinkID = new HashSet<AccountLinkPermission>();
            AccountLinkPermissionAccountOwnerID = new HashSet<AccountLinkPermission>();
            AccountLinkPermissionGuestID = new HashSet<AccountLinkPermission>();
        }

        public AccountLink(int accountOwnerID, int guestID, DateTime dateCreated, DateTime dateStart, DateTime dateExpires)
        {
            AccountLinkPermissionAccountLinkID = new HashSet<AccountLinkPermission>();
            AccountLinkPermissionAccountOwnerID = new HashSet<AccountLinkPermission>();
            AccountLinkPermissionGuestID = new HashSet<AccountLinkPermission>();
            this.AccountOwnerID = accountOwnerID;
            this.GuestID = guestID;
            this.DateCreated = dateCreated;
            this.DateStart = dateStart;
            this.DateExpires = dateExpires;
        }

        [Key, Column(Order = 0)]
        public int AccountLinkID { get; set; }
        [Key, Column(Order = 1)]
        public int AccountOwnerID { get; set; }
        [Key, Column(Order = 2)]
        public int GuestID { get; set; }

        public DateTime DateCreated { get; set; }
        public DateTime DateStart { get; set; }
        public DateTime DateExpires { get; set; }

        [ForeignKey("AccountOwnerID")]
        public virtual AccountOwner AccountOwner { get; set; }

        [ForeignKey("GuestID")]
        public virtual Guest Guest { get; set; }

        public virtual ICollection<AccountLinkPermission> AccountLinkPermissionAccountLinkID { get; set; }
        public virtual ICollection<AccountLinkPermission> AccountLinkPermissionAccountOwnerID { get; set; }
        public virtual ICollection<AccountLinkPermission> AccountLinkPermissionGuestID { get; set; }
    }

Code for AccountLinkPermissions

public class AccountLinkPermission
    {
        public AccountLinkPermission()
        {

        }

        public AccountLinkPermission(int accountLinkID, int accountOwnerID, int guestID, int permissionID)
        {
            this.AccountLinkID = accountLinkID;
            this.AccountOwnerID = accountOwnerID;
            this.GuestID = guestID;
            this.PermissionID = permissionID;
        }

        [Key, Column(Order = 0)]
        public int AccountLinkID { get; set; }
        [Key, Column(Order = 1)]
        public int AccountOwnerID { get; set; }
        [Key, Column(Order = 2)]
        public int GuestID { get; set; }
        [Key, Column(Order = 3)]
        public int PermissionID { get; set; }

        [InverseProperty("AccountLinkPermissionAccountLinkID")]
        public virtual AccountLink AccountLink { get; set; }

        [InverseProperty("AccountLinkPermissionAccountOwnerID")]
        public virtual AccountLink AccountLinkAccountOwner { get; set; }

        [InverseProperty("AccountLinkPermissionGuestID")]
        public virtual AccountLink AccountLinkGuest { get; set; }

        [ForeignKey("PermissionID")]
        public virtual Permission Permission { get; set; }
    }

The reason why I want the 3-column composite key is to prevent duplicates.

Why I use InverseProperty instead of ForeignKey

Because I'm using multiple foreign keys linked to the same table, EF is not able to determine by convention which navigation properties belong together. Instead of using the property [ForeignKey] I have to use [InverseProperty] which defines the navigation property on the other end of the relationship.

What I need help with

How do I remove all the navigation properties in my database using code first? I know I messed it up somewhere but that's all I know :)

1 Navigation property in AccountLinks, (User_UserID)

9 navigation properties in AccountLinkPermissions

Bonus question

In the AccountLink table I have a composite key of three columns: AccountLinkID, AccountOwnerID and GuestID. Is it possible to put auto increment (identity seed) on AccountLinkID? How would I do that in EF code first?
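On the bonus question: for composite keys, EF stops assuming identity, so the generation strategy has to be stated per column. A sketch of the key properties with explicit DatabaseGenerated attributes:

```csharp
// Identity on the first key column; the other key parts are
// supplied by the application, so mark them as not generated.
[Key, Column(Order = 0)]
[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int AccountLinkID { get; set; }

[Key, Column(Order = 1)]
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public int AccountOwnerID { get; set; }

[Key, Column(Order = 2)]
[DatabaseGenerated(DatabaseGeneratedOption.None)]
public int GuestID { get; set; }
```

Note that if AccountLinkID is identity and unique on its own, it can be the sole primary key, with a separate unique index on (AccountOwnerID, GuestID) to prevent duplicates; that alone would remove most of the unwanted navigation-property duplication.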

SELECT column WHERE time_type = 'Break' but only the rows that are after(below) time_type = 'Start'

Currently I'm stuck on an issue; I hope some good PostgreSQL fellow programmer can give me a hand with it. This is my table:

[screenshot omitted]

I would like to SELECT all 'time_elapse' values WHERE time_type = 'Break', but only the rows that come after (below) the last (descending) time_type = 'Start', and sum them up.

So in the table above I would SELECT...

time_elapse           |   time_type  |  time_index
----------------------+--------------+-------------
00-00-00 01:00:00.00  |   Break      |  2.1
00-00-00 01:00:00.00  |   Break      |  2.2

So totalbreak = 00-00-00 02:00:00.000

I know how to convert character varying to timestamp in order to sum them up (please don't bother with that; I'm not looking for help there, so let's imagine the 'time' and 'time_elapse' columns are proper timestamps). I just don't know the syntax to select all the relevant 'Breaks' and sum them up (say the maximum number of Breaks between each 'Start' is nine).

I can hardly imagine a way to do that, so I would like to ask for suggestions.
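One suggestion: a scalar subquery finds the last 'Start', and the outer query keeps only 'Break' rows after it. Sketched below with Python's sqlite3, using integer minutes as a stand-in for the interval arithmetic the question says to take as given (time_index is assumed to order the rows, per the sample):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (time_elapse INTEGER, time_type TEXT, "
            "time_index REAL)")
con.executemany("INSERT INTO log VALUES (?, ?, ?)",
                [(30, "Break", 1.1), (0, "Start", 2.0),
                 (60, "Break", 2.1), (60, "Break", 2.2)])

# Only Break rows after the last Start contribute to the total.
total = con.execute("""
    SELECT COALESCE(SUM(time_elapse), 0) FROM log
    WHERE time_type = 'Break'
      AND time_index > (SELECT MAX(time_index) FROM log
                        WHERE time_type = 'Start')
""").fetchone()[0]
print(total)  # 120
```

In Postgres the WHERE clause is identical; with real interval columns the SUM of intervals then yields the 02:00:00 total directly, with no nine-break limit needed.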

Msg 512, Level 16, State 1, Procedure trg_pricebase, Line 13 Subquery returned more than 1 value

How do I solve this kind of problem? Help me, please.

error:

Msg 512, Level 16, State 1, Procedure trg_pricebase, Line 13 Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression. The statement has been terminated.


and my trigger is :

ALTER TRIGGER [dbo].[trg_pricebase]
ON  [dbo].[tbl_model2]        
AFTER UPDATE
AS 
   BEGIN

                DECLARE @price_base NVARCHAR(50) = (SELECT tbl_model2.price_base FROM tbl_model2)
                DECLARE @tipid  int = (SELECT tbl_model2.id FROM tbl_model2)


                INSERT INTO tbl_price_history (tbl_price_history.price_base,tbl_price_history.tipid)
                VALUES (@price_base, @tipid )

    END
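The subquery error comes from selecting the entire base table into scalar variables. A trigger fires once per statement, possibly for many rows, so the rows it should log are read from the `inserted` pseudo-table instead. A sketch:

```sql
-- Read the affected rows from "inserted"; handles multi-row
-- UPDATEs with a single set-based INSERT, no variables needed.
ALTER TRIGGER [dbo].[trg_pricebase]
ON [dbo].[tbl_model2]
AFTER UPDATE
AS
BEGIN
    INSERT INTO tbl_price_history (price_base, tipid)
    SELECT i.price_base, i.id
    FROM inserted AS i;
END
```

Adding `IF UPDATE(price_base)` before the INSERT would restrict logging to updates that actually touched the price column.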

ordering SQL results to match the IN() list

I have a query:

SELECT categories_id
     , categories_name
     ,categories_url
FROM categories
WHERE categories_id IN(0,72,51,52)

For the IN function I have the numbers 0,72,51,52, but the SQL results come back as 0,51,52,72, i.e. in ascending order. How can I order them in the original order, 0,72,51,52?
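In MySQL, FIELD() returns the 1-based position of a value within its argument list, which reproduces the hand-written order directly:

```sql
SELECT categories_id, categories_name, categories_url
FROM categories
WHERE categories_id IN (0, 72, 51, 52)
ORDER BY FIELD(categories_id, 0, 72, 51, 52);
```

On engines without FIELD(), the portable equivalent is `ORDER BY CASE categories_id WHEN 0 THEN 1 WHEN 72 THEN 2 WHEN 51 THEN 3 WHEN 52 THEN 4 END`.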

Base64 encoded image string into Oracle BLOB database

I am tasked with taking a base64 encoded image and storing it into an Oracle SQL database as a blob.

I have the string of the base64 encoded image, but it doesn't look like what gets exported from the SQL database column blob.

When I export the blob from the database into an xml it looks like so:

image="FFD8FFE000104A46494600010100000100010000FFFE003B43524541544F523A2067642D6A7065672076312E3020287573696E6720494A47204A50454720763830292C207175616C697479203D2039300AFFDB0043000302020302020303030304030304050805050404050A070706080C0A0C0C0B0A0B0B0D0E12100D0E110E0B0B101610111314...."

Does anyone know what conversion this is? Is this type of conversion possible in C#?

Any help is greatly appreciated, or just a push in the right direction would be awesome also!
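The exported string is hexadecimal, not base64: `FFD8FFE0...` is the JPEG magic number rendered as hex, which is just how the tool displays raw BLOB bytes. So the conversion path is base64 string to bytes, and the bytes are what get stored; a C# sketch (`base64Image` is a placeholder variable):

```csharp
// Decode the base64 payload to raw bytes; these bytes are what the
// BLOB column should hold (bind them as a parameter of a RAW/BLOB type).
byte[] imageBytes = Convert.FromBase64String(base64Image);

// Only needed to compare against the hex seen in the XML export:
string hex = BitConverter.ToString(imageBytes).Replace("-", "");
```

Storing `imageBytes` via a parameterized INSERT lets the Oracle driver handle the BLOB encoding; no hex conversion is required on the way in.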

Thanks,

errno: 150 "Foreign key constraint is incorrectly formed"

I'm getting the 150 error when attempting to run the following setup script. What's wrong?

serverPermissions seems to trigger the error, but I'm not exactly sure why. As far as I can tell, the foreign key constraints are all properly mapped. I am probably missing something obvious.

-- Setup my members database
DROP DATABASE IF EXISTS mymembers;
CREATE DATABASE IF NOT EXISTS mymembers;

/*
* The list of all members in the community
*/
DROP TABLE IF EXISTS members;
CREATE TABLE members
(
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(30),
steamID3 INT NOT NULL UNIQUE,
PRIMARY KEY (id)
);

/*
* Lists all servers in operation
*/
DROP TABLE IF EXISTS servers;
CREATE TABLE servers
(
sid INT NOT NULL AUTO_INCREMENT,
sname VARCHAR(30), -- name of server (short name)
PRIMARY KEY(sid)
);

/*
* Maps a member and a server together along with a flag. 
* Record (0, 12, 1, 0) indicates member 12 has flag 0 on server 1.
*/
DROP TABLE IF EXISTS serverPermissions;
CREATE TABLE serverPermissions
(
mid INT,
sid INT,
flagid INT,
PRIMARY KEY(mid, sid, flagid),
FOREIGN KEY (mid) REFERENCES members(id),
FOREIGN KEY (sid) REFERENCES servers(sid),
FOREIGN KEY (flagid) REFERENCES flags(id)
);

/*
* flags are used to give custom permissions to players.
* For example a record may be: (0, "VIP", "This player is very important")
*/
DROP TABLE IF EXISTS flags;
CREATE TABLE flags
(
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(10),
description VARCHAR(100),
PRIMARY KEY(id)
);
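One likely culprit: serverPermissions declares a foreign key on flags(id), but the flags table is created after serverPermissions, so the referenced table doesn't exist when the constraint is formed (the script also never selects the new database). A sketch of the fix is to select the database and create each referenced table before the table that points at it:

```sql
USE mymembers;

-- Create the referenced table first...
CREATE TABLE flags
(
id INT NOT NULL AUTO_INCREMENT,
name VARCHAR(10),
description VARCHAR(100),
PRIMARY KEY(id)
);

-- ...so that flags(id) exists when this constraint is checked.
CREATE TABLE serverPermissions
(
mid INT,
sid INT,
flagid INT,
PRIMARY KEY(mid, sid, flagid),
FOREIGN KEY (mid) REFERENCES members(id),
FOREIGN KEY (sid) REFERENCES servers(sid),
FOREIGN KEY (flagid) REFERENCES flags(id)
);
```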

Why does my SSIS package take so long to execute?

I am fairly new to creating SSIS packages; I have this SQL Server 2008 table called BanqueDetailHistoryRef containing 10,922,583 rows.

I want to extract the rows that were inserted on a specific date (or dates) and insert them into a table on another server. I am trying to achieve this through an SSIS package whose data flow looks like this:

OLEDB Source (the table with the 10Million+ records) --> Lookup --> OLEDB Destination

On the Lookup I have set the options shown in three screenshots (not reproduced here).

Now, the query (specified on the Lookup transformation):

SELECT * FROM BanqueDetailHistoryRef WHERE ValueDate = '2014-01-06';

takes around 1 second to run in SQL Server Management Studio, but the described SSIS package takes a really long time to run (about an hour).

What is causing this? Is this the right way to achieve my desired results?
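A common cause of this pattern being slow: the OLE DB Source streams all 10M+ rows into the pipeline, and a Lookup in full cache mode additionally loads its entire reference query before the first row flows. A simpler sketch is to push the date filter into the source itself so only matching rows ever leave the server:

```sql
-- Use this as the OLE DB Source's "SQL command" (data access mode),
-- instead of selecting the whole table and filtering downstream:
SELECT *
FROM BanqueDetailHistoryRef
WHERE ValueDate = '2014-01-06';
```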

Insight.Database procedure call using ODBC fails

I am using the ODBC provider to connect to an iSeries DB2 database. All of the following calls work great:

List<Beer> beers = conn.QuerySql<Beer>("SELECT * FROM Beer WHERE Typee = @Typee", new { Typee = "IPA" }).ToList();

var beer = new Beer { ID=41,Typee="Medium", Description = "From dotNet Neither light or dark"};

conn.ExecuteSql("INSERT INTO Beer VALUES (@ID, @Typee, @Description)", new { ID = 4, Typee = "Medium", Description = "From dotNet Neither light or dark" });

conn.ExecuteSql("CALL INSERTBEER (@ID, @Typee, @Description) ", new { ID = 4, Typee = "MediumOD", Description = "From dotNet Neither light or dark" });

conn.ExecuteSql("INSERT INTO Beer VALUES (@ID, @Typee, @Description)", beer);

Which is awesome; I am really digging this micro-ORM. But I don't want to type out the left-hand-side parameters in the call to the InsertBeer stored procedure. I believe that is the type of plumbing work that can be avoided with the Execute() method. Sadly, these calls DON'T work:

conn.Execute("INSERTBEER", beer);

I also tried:

conn.Execute("insertbeer", new Beer {ID = 4, Typee = "Medium", Description = "From dotNet Neither light or dark"}, CommandType.StoredProcedure, true, Int32.MaxValue, null, beer);   

The error I am getting is:

{"ERROR [42000] [IBM][System i Access ODBC Driver][DB2 for i5/OS]SQL0104 - Token INSERTBEER was not valid. Valid tokens: ( CL END GET SET CALL DROP FREE HOLD LOCK OPEN WITH."}

Any ideas greatly appreciated!
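The SQL0104 message suggests the driver received the bare procedure name as statement text, which DB2 for i cannot parse. One workaround (a guess, reusing the ExecuteSql style that already works above) is to keep spelling out the CALL statement, or to use the ODBC call escape sequence that ODBC drivers generally understand:

```sql
-- ODBC call escape syntax; parameters are bound by name,
-- as in the working ExecuteSql examples above.
{CALL INSERTBEER (@ID, @Typee, @Description)}
```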

Improve on: how to get the number of occurrences of each distinct value with GROUP_CONCAT in MySQL

I am trying to build on this answer to get improved output. I have a table like this:

name |status
-------------
mike |yes
mike |yes
mike |no
mike |ney
john |no
john |ney
john |yes

to output something like this

name |status           |total
------------------------------
mike |yes-2,no-1,ney-1 | 4
john |yes-1,no-1,ney-1 | 3

Someone suggested this query, which works great:

SELECT name, GROUP_CONCAT(totalPerStatus) AS status, 
       (SELECT COUNT(*) FROM mytable WHERE name = t.name) AS total
FROM (
  SELECT name,      
         CONCAT(status, '-', COUNT(*)) AS totalPerStatus            
  FROM mytable
  GROUP BY name, status ) t
GROUP BY name;

But I want to improve on this output to get something like this:

name | yes | no | ney | total
------------------------------
mike |2    |1   |1    | 4
john |1    |1   |1    | 3
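For that pivoted layout, conditional aggregation is one approach (assuming MySQL, where a boolean comparison evaluates to 0 or 1 inside SUM):

```sql
-- Each SUM counts the rows where the comparison is true (1).
SELECT name,
       SUM(status = 'yes') AS yes,
       SUM(status = 'no')  AS no,
       SUM(status = 'ney') AS ney,
       COUNT(*)            AS total
FROM mytable
GROUP BY name;
```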